1. 22 Feb 2012 (6 commits)
  2. 21 Feb 2012 (1 commit)
  3. 14 Feb 2012 (1 commit)
  4. 13 Feb 2012 (1 commit)
  5. 03 Feb 2012 (1 commit)
  6. 08 Jan 2012 (1 commit)
  7. 04 Jan 2012 (1 commit)
  8. 21 Dec 2011 (15 commits)
    • tracing: Factorize filter creation · 38b78eb8
      Committed by Tejun Heo
      There are four places where a new filter for a given filter string is
      created, each involving several distinct steps.  This patch factors
      those steps into create_[system_]filter() functions which in turn make
      use of create_filter_{start|finish}() for the common parts.
      
      The only functional change is that if replace_filter_string() is
      requested and fails, creation fails without any side effect instead of
      being ignored.
      
      Note that the system filter is now installed after processing is
      complete, which makes freeing it beforehand and then restoring the
      filter string on error unnecessary.
      
      -v2: Rebased to resolve conflict with 49aa2951 and updated both
           create_filter() functions to always set *filterp instead of
           requiring the caller to clear it to %NULL on entry.
      
      Link: http://lkml.kernel.org/r/1323988305-1469-2-git-send-email-tj@kernel.org
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
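      The factored shape described above might be sketched as follows; the function names come from the commit, but the struct layout, signatures, and bodies here are assumptions for illustration, not the kernel's actual code.

      ```c
      #include <stdlib.h>
      #include <string.h>

      /* Hypothetical stand-ins: the real kernel structs and parsing
       * steps are richer; only the control flow is sketched here. */
      struct event_filter { char *filter_string; };

      /* Common setup: allocate the filter and copy the filter string.
       * On any failure, nothing is left allocated (no side effects). */
      static struct event_filter *create_filter_start(const char *string)
      {
          struct event_filter *filter = calloc(1, sizeof(*filter));

          if (!filter)
              return NULL;
          filter->filter_string = strdup(string); /* replace_filter_string() step */
          if (!filter->filter_string) {
              free(filter);
              return NULL;
          }
          return filter;
      }

      /* Common finalization would go here. */
      static void create_filter_finish(struct event_filter *filter)
      {
          (void)filter;
      }

      /* Always sets *filterp, so callers need not pre-clear it to NULL. */
      static int create_filter(const char *string,
                               struct event_filter **filterp)
      {
          struct event_filter *filter = create_filter_start(string);

          *filterp = filter;
          if (!filter)
              return -1;
          create_filter_finish(filter);
          return 0;
      }
      ```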
    • tracing: Have stack tracing set filtered functions at boot · 762e1207
      Committed by Steven Rostedt
      Add stacktrace_filter= to the kernel command line that lets
      the user pick specific functions to check the stack on.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Allow access to the boot time function enabling · 2a85a37f
      Committed by Steven Rostedt
      Change set_ftrace_early_filter() to ftrace_set_early_filter()
      and make it a global function. This will allow other kernel
      subsystems to enable function tracing at startup and reuse the
      ftrace function-parsing code.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Have stack_tracer use a separate list of functions · d2d45c7a
      Committed by Steven Rostedt
      The stack_tracer looks at every function and checks whether
      the current stack is bigger than the last recorded max stack size.
      When a new max is found, it saves that stack off.
      
      Currently the stack tracer is limited by the global_ops of
      the function tracer. As the stack tracer has nothing to do with
      the ftrace function tracer, except that it uses it as its internal
      engine, the stack tracer should have its own list.
      
      A new file is added to the tracing debugfs directory called:
      
        stack_trace_filter
      
      that can be used to select which functions you want to check the stack
      on.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Decouple hash items from showing filtered functions · 69a3083c
      Committed by Steven Rostedt
      The set_ftrace_filter shows "hashed" functions, which are functions
      that are added with operations to them (like traceon and traceoff).
      
      Since other subsystems may show which functions they are using
      for function tracing, the hash items should no longer be displayed
      just because the FILTER flag is set; they have nothing to do with
      other subsystems' filters.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Allow other users of function tracing to use the output listing · fc13cb0c
      Committed by Steven Rostedt
      The function tracer is set up to allow any other subsystem (like perf)
      to use it. Ftrace already has a way to list which functions are enabled
      by the global_ops, and it would be very helpful to let other users of
      the function tracer use the same code.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Create ftrace_hash_empty() helper routine · 06a51d93
      Committed by Steven Rostedt
      There are two types of hashes in the ftrace_ops; one type
      is the filter_hash and the other is the notrace_hash. Either
      one may be null, meaning it has no elements. But when elements
      are added, the hash is allocated.
      
      Throughout the code, a check needs to be made for whether a hash
      exists and whether it has elements, but the existence check is
      usually missing, leaving possible NULL pointer dereference bugs.
      
      Add a helper routine, ftrace_hash_empty(), that returns true if
      the hash doesn't exist or its count is zero, since the two mean
      the same thing.
      Last-bug-reported-by: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
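      A minimal userspace sketch of the helper described above; the struct layout here is illustrative, but the predicate matches the commit's description (a NULL hash and a zero-count hash both read as empty).

      ```c
      #include <stdbool.h>
      #include <stddef.h>

      /* Illustrative struct: only the count matters for this helper. */
      struct ftrace_hash { unsigned long count; };

      /* True when the hash was never allocated (NULL) or has no
       * elements; the commit treats the two cases identically. */
      static bool ftrace_hash_empty(const struct ftrace_hash *hash)
      {
          return !hash || !hash->count;
      }
      ```

      Callers can then test emptiness once, without a separate NULL check before the count check.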
    • ftrace: Fix ftrace hash record update with notrace · c842e975
      Committed by Steven Rostedt
      When disabling the "notrace" records, that means we want to trace them.
      If the notrace_hash is zero, it means that we want to trace all
      records. But to disable a zero notrace_hash means nothing.
      
      The check for the notrace_hash count was incorrect with:
      
      	if (hash && !hash->count)
      		return
      
      The comment above it correctly states that we do nothing if the
      notrace_hash has zero count, but !hash also means the notrace hash
      has zero count. The check was presumably written this way to
      protect against dereferencing NULL, yet if !hash is true we fall
      through and go over the following loop without doing a single thing.
      
      Fix it to:
      
      	if (!hash || !hash->count)
      		return;
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
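      The difference between the two guards can be shown in isolation; the wrapper function names below are hypothetical, wrapping the two conditions quoted in the commit.

      ```c
      #include <stdbool.h>
      #include <stddef.h>

      struct ftrace_hash { unsigned long count; };

      /* The buggy guard quoted above: for a NULL hash it evaluates to
       * false, so the caller does NOT return early and instead walks
       * the following loop for nothing. */
      static bool bails_out_buggy(const struct ftrace_hash *hash)
      {
          return hash && !hash->count;
      }

      /* The fixed guard: NULL and zero count both mean "trace all
       * records", so both cases return early. */
      static bool bails_out_fixed(const struct ftrace_hash *hash)
      {
          return !hash || !hash->count;
      }
      ```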
    • ftrace: Use bsearch to find record ip · 5855fead
      Committed by Steven Rostedt
      Now that each set of pages in the function list are sorted by
      ip, we can use bsearch to find a record within each set of pages.
      This speeds up the ftrace_location() function by orders of magnitude.
      
      For archs (like x86) that need to add a breakpoint at every function
      that will be converted from a nop to a callback and vice versa,
      the breakpoint callback needs to know if the breakpoint was for
      ftrace or not. It requires finding the breakpoint ip within the
      records. Doing a linear search is extremely inefficient. It is
      a must to be able to do a fast binary search to find these locations.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
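      Assuming a sorted array of records, the lookup the commit describes can be sketched with the C library's bsearch(); the struct and helper names are illustrative, not the kernel's.

      ```c
      #include <stdlib.h>

      /* Illustrative record: only the ip field of dyn_ftrace is modeled. */
      struct dyn_ftrace { unsigned long ip; };

      static int cmp_ip(const void *key, const void *elem)
      {
          unsigned long ip = *(const unsigned long *)key;
          const struct dyn_ftrace *rec = elem;

          if (ip < rec->ip)
              return -1;
          if (ip > rec->ip)
              return 1;
          return 0;
      }

      /* Binary-search one sorted page group for the record at ip;
       * returns NULL when ip is not an mcount call site. */
      static struct dyn_ftrace *lookup_rec(struct dyn_ftrace *recs,
                                           size_t nr, unsigned long ip)
      {
          return bsearch(&ip, recs, nr, sizeof(*recs), cmp_ip);
      }
      ```

      This is how a breakpoint handler can cheaply decide whether a trapped ip belongs to ftrace.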
    • ftrace: Sort the mcount records on each page · 68950619
      Committed by Steven Rostedt
      Sort records by the ip locations of the ftrace mcount calls on each
      set of pages in the function list. This improves cache usage when
      updating the function locations and also gives us the ability to
      quickly find an ip location in the list.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
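      A minimal sketch of the per-page-group sort, using the C library's qsort() in place of the kernel's own sort routine; the comparator mirrors the obvious ip ordering, and the struct layout is assumed.

      ```c
      #include <stdlib.h>

      struct dyn_ftrace { unsigned long ip; };

      /* Comparator ordering records by the ip of their mcount call. */
      static int ftrace_cmp_ips(const void *a, const void *b)
      {
          const struct dyn_ftrace *ra = a, *rb = b;

          if (ra->ip < rb->ip)
              return -1;
          if (ra->ip > rb->ip)
              return 1;
          return 0;
      }

      /* Sort one page group of records in place. */
      static void sort_page_group(struct dyn_ftrace *recs, size_t nr)
      {
          qsort(recs, nr, sizeof(*recs), ftrace_cmp_ips);
      }
      ```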
    • ftrace: Replace record newlist with record page list · 85ae32ae
      Committed by Steven Rostedt
      As new functions come in to be initialized from mcount to nop,
      they are handled in groups of pages, whether they belong to the
      core kernel or to a module. There's no need to keep track of these
      on a per-record basis.
      
      At startup, and as any module is loaded, the functions to be
      traced are stored in a group of pages and added to the function
      list at the end. We just need to keep a pointer to the first
      page of the list that was added, and use that to know where to
      start on the list for initializing functions.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Allocate the mcount record pages as groups · a7900875
      Committed by Steven Rostedt
      Allocate the mcount record pages as a group of pages as big
      as can be allocated and waste no more than a single page.
      
      Grouping the mcount pages as much as possible helps with cache
      locality, as we do not need to redirect with descriptors as we
      cross from page to page. It also allows us to do more with the
      records later on (sort them with bigger benefits).
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
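      The "as big as possible, wasting less than a page" rule can be illustrated with some order arithmetic. The page size, record size, and pick_order() helper below are all assumptions for illustration; records that no longer fit in the shrunken group would simply start the next group.

      ```c
      #define PAGE_BYTES 4096L
      #define REC_SIZE   32L   /* assumed record size, not the kernel's */

      static long group_bytes(int order) { return PAGE_BYTES << order; }

      /* Pick an allocation order for `count` records: grow until they
       * all fit, then shrink while at least one whole page would sit
       * empty at the tail of the group. */
      static int pick_order(long count)
      {
          int order = 0;

          while (group_bytes(order) / REC_SIZE < count)
              order++;
          while (order > 0 &&
                 group_bytes(order) - count * REC_SIZE >= PAGE_BYTES)
              order--;
          return order;
      }
      ```

      For example, 257 records need just over two pages; a four-page (order 2) group would leave nearly two pages empty, so the sketch settles on order 1 and lets the overflow start the next group.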
    • ftrace: Remove usage of "freed" records · 32082309
      Committed by Steven Rostedt
      Records that are added to the function trace table are
      permanently there, except for modules. By separating out the
      modules to their own pages that can be freed in one shot
      we can remove the "freed" flag and simplify some of the record
      management.
      
      Another benefit of doing this is that we can also move the
      records around; sort them.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Allow archs to modify code without stop machine · c88fd863
      Committed by Steven Rostedt
      The stop machine method to modify all functions in the kernel
      (some 20,000 of them) is the safest way to do so across all archs.
      But some archs may not need this big-hammer approach to modify code
      on SMP machines and can simply update the code they need to.
      
      Adding a weak function, arch_ftrace_update_code(), that currently
      performs the stop_machine call also lets any arch override this method.
      
      If the arch needs to check the system and then decide if it can
      avoid stop machine, it can still call ftrace_run_stop_machine() to
      use the old method.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
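      A userspace sketch of the weak-function override pattern the commit relies on (GCC/Clang `__attribute__((weak))`); arch_ftrace_update_code() is the real symbol name, but the body and the flag here are illustrative.

      ```c
      /* Flag recording which implementation ran (illustrative). */
      static int used_stop_machine_path;

      /* Weak default: stands in for the version that calls
       * ftrace_run_stop_machine(). An arch that can patch code safely
       * on SMP overrides it simply by defining a strong symbol of the
       * same name; the linker then drops this default. */
      void __attribute__((weak)) arch_ftrace_update_code(int command)
      {
          (void)command;
          used_stop_machine_path = 1;
      }
      ```

      With no strong definition linked in, a call resolves to the weak default, which is exactly the fallback behavior the commit describes.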
    • ftrace: Fix unregister ftrace_ops accounting · 30fb6aa7
      Committed by Jiri Olsa
      Multiple users of the function tracer can register their functions
      with the ftrace_ops structure. The accounting within ftrace will
      update the counter on each function record that is being traced.
      When the ftrace_ops filtering adds or removes functions, the
      function records will be updated accordingly if the ftrace_ops is
      still registered.
      
      When an ftrace_ops is removed, the counters of the function records
      that it traces are decremented. When they reach zero, the functions
      they represent are modified to stop calling the mcount code.
      
      When changes are made, the code is updated via stop_machine() with
      a command passed to the function to tell it what to do. There is an
      ENABLE and DISABLE command that tells the called function to enable
      or disable the functions. But the ENABLE is really a misnomer as it
      should just update the records, as records that have been enabled
      and now have a count of zero should be disabled.
      
      The DISABLE command is used to disable all functions regardless of
      their counter values. This is the big off switch and is not the
      complement of the ENABLE command.
      
      To make matters worse, when an ftrace_ops is unregistered while
      another ftrace_ops is still registered, neither the DISABLE nor the
      ENABLE command is set when calling into the stop_machine() function,
      so the records are not updated to match their counters. A command is
      passed to that function that updates the mcount code to call the
      registered callback directly if it is the only one left. This means
      that the still-registered ftrace_ops will have its callback called by
      all functions that have been set for it, as well as by those of the
      ftrace_ops that was just unregistered.
      
      Here's a way to trigger this bug. Compile the kernel with
      CONFIG_FUNCTION_PROFILER set and with CONFIG_FUNCTION_GRAPH not set:
      
       CONFIG_FUNCTION_PROFILER=y
       # CONFIG_FUNCTION_GRAPH is not set
      
      This will force the function profiler to use the function tracer instead
      of the function graph tracer.
      
        # cd /sys/kernel/debug/tracing
        # echo schedule > set_ftrace_filter
        # echo function > current_tracer
        # cat set_ftrace_filter
       schedule
        # cat trace
       # tracer: nop
       #
       # entries-in-buffer/entries-written: 692/68108025   #P:4
       #
       #                              _-----=> irqs-off
       #                             / _----=> need-resched
       #                            | / _---=> hardirq/softirq
       #                            || / _--=> preempt-depth
       #                            ||| /     delay
       #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
       #              | |       |   ||||       |         |
            kworker/0:2-909   [000] ....   531.235574: schedule <-worker_thread
                 <idle>-0     [001] .N..   531.235575: schedule <-cpu_idle
            kworker/0:2-909   [000] ....   531.235597: schedule <-worker_thread
                   sshd-2563  [001] ....   531.235647: schedule <-schedule_hrtimeout_range_clock
      
        # echo 1 > function_profile_enabled
  # echo 0 > function_profile_enabled
        # cat set_ftrace_filter
       schedule
        # cat trace
       # tracer: function
       #
       # entries-in-buffer/entries-written: 159701/118821262   #P:4
       #
       #                              _-----=> irqs-off
       #                             / _----=> need-resched
       #                            | / _---=> hardirq/softirq
       #                            || / _--=> preempt-depth
       #                            ||| /     delay
       #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
       #              | |       |   ||||       |         |
                 <idle>-0     [002] ...1   604.870655: local_touch_nmi <-cpu_idle
                 <idle>-0     [002] d..1   604.870655: enter_idle <-cpu_idle
                 <idle>-0     [002] d..1   604.870656: atomic_notifier_call_chain <-enter_idle
                 <idle>-0     [002] d..1   604.870656: __atomic_notifier_call_chain <-atomic_notifier_call_chain
      
      The same problem could have happened with the trace_probe_ops,
      but they are modified via the set_ftrace_filter file, which performs
      the update when the file is closed.
      
      The simple solution is to change ENABLE to UPDATE and call it every
      time an ftrace_ops is unregistered.
      
      Link: http://lkml.kernel.org/r/1323105776-26961-3-git-send-email-jolsa@redhat.com
      
      Cc: stable@vger.kernel.org # 3.0+
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  9. 12 Dec 2011 (1 commit)
  10. 06 Dec 2011 (4 commits)
  11. 02 Dec 2011 (1 commit)
  12. 18 Nov 2011 (2 commits)
    • trace: Include <asm/asm-offsets.h> in trace_syscalls.c · d5e553d6
      Committed by H. Peter Anvin
      Include <asm/asm-offsets.h> into trace_syscalls.c, to allow for
      NR_syscalls to be automatically generated.
      
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    • tracing: Add entries in buffer and total entries to default output header · 39eaf7ef
      Committed by Steven Rostedt
      Knowing the number of event entries in the ring buffer compared
      to the total number written is useful information. The latency
      format provides it, and there is no reason the default format
      should not.
      
      This information is now added to the default header, along with the
      number of online CPUs:
      
       # tracer: nop
       #
       # entries-in-buffer/entries-written: 159836/64690869   #P:4
       #
       #                              _-----=> irqs-off
       #                             / _----=> need-resched
       #                            | / _---=> hardirq/softirq
       #                            || / _--=> preempt-depth
       #                            ||| /     delay
       #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
       #              | |       |   ||||       |         |
                 <idle>-0     [000] ...2    49.442971: local_touch_nmi <-cpu_idle
                 <idle>-0     [000] d..2    49.442973: enter_idle <-cpu_idle
                 <idle>-0     [000] d..2    49.442974: atomic_notifier_call_chain <-enter_idle
                 <idle>-0     [000] d..2    49.442976: __atomic_notifier_call_chain <-atomic_notifier
      
      The above shows that the trace contains 159836 entries, but
      64690869 were written. One could figure out that there were
      64531033 entries that were dropped.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
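      The dropped-event count quoted above is just the difference between the two numbers in the new header line; a trivial helper (the function name is illustrative) makes the relation explicit.

      ```c
      /* dropped = entries written minus entries still in the buffer */
      static unsigned long dropped_entries(unsigned long written,
                                           unsigned long in_buffer)
      {
          return written - in_buffer;
      }
      ```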
  13. 17 Nov 2011 (1 commit)
    • tracing: Add irq, preempt-count and need resched info to default trace output · 77271ce4
      Committed by Steven Rostedt
      People keep asking how to get the preempt count, irq, and need-resched
      info, and we keep telling them to enable the latency format. Some
      developers think that traces without this info are completely useless,
      and for a lot of tasks they are.
      
      The first option was to make the latency format the default, but the
      header for the latency format is pretty useless for most tracers, and
      it also reports timestamps in straight microseconds from the time the
      trace started. This is sometimes more difficult to read than the
      default trace's seconds from the start of boot up.
      
      Latency format:
      
       # tracer: nop
       #
       # nop latency trace v1.1.5 on 3.2.0-rc1-test+
       # --------------------------------------------------------------------
       # latency: 0 us, #159771/64234230, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
       #    -----------------
       #    | task: -0 (uid:0 nice:0 policy:0 rt_prio:0)
       #    -----------------
       #
       #                  _------=> CPU#
       #                 / _-----=> irqs-off
       #                | / _----=> need-resched
       #                || / _---=> hardirq/softirq
       #                ||| / _--=> preempt-depth
       #                |||| /     delay
       #  cmd     pid   ||||| time  |   caller
       #     \   /      |||||  \    |   /
       migratio-6       0...2 41778231us+: rcu_note_context_switch <-__schedule
       migratio-6       0...2 41778233us : trace_rcu_utilization <-rcu_note_context_switch
       migratio-6       0...2 41778235us+: rcu_sched_qs <-rcu_note_context_switch
       migratio-6       0d..2 41778236us+: rcu_preempt_qs <-rcu_note_context_switch
       migratio-6       0...2 41778238us : trace_rcu_utilization <-rcu_note_context_switch
       migratio-6       0...2 41778239us+: debug_lockdep_rcu_enabled <-__schedule
      
      default format:
      
       # tracer: nop
       #
       #           TASK-PID    CPU#    TIMESTAMP  FUNCTION
       #              | |       |          |         |
            migration/0-6     [000]    50.025810: rcu_note_context_switch <-__schedule
            migration/0-6     [000]    50.025812: trace_rcu_utilization <-rcu_note_context_switch
            migration/0-6     [000]    50.025813: rcu_sched_qs <-rcu_note_context_switch
            migration/0-6     [000]    50.025815: rcu_preempt_qs <-rcu_note_context_switch
            migration/0-6     [000]    50.025817: trace_rcu_utilization <-rcu_note_context_switch
            migration/0-6     [000]    50.025818: debug_lockdep_rcu_enabled <-__schedule
            migration/0-6     [000]    50.025820: debug_lockdep_rcu_enabled <-__schedule
      
      The latency format header has latency information that is pretty
      meaningless for most tracers. Some of the header is useful, though,
      and we can add that to the default format later as well.
      
      What is really useful with the latency format is the irqs-off, need-resched
      hard/softirq context and the preempt count.
      
      This commit adds the option irq-info, which is on by default and
      adds this information:
      
       # tracer: nop
       #
       #                              _-----=> irqs-off
       #                             / _----=> need-resched
       #                            | / _---=> hardirq/softirq
       #                            || / _--=> preempt-depth
       #                            ||| /     delay
       #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
       #              | |       |   ||||       |         |
                 <idle>-0     [000] d..2    49.309305: cpuidle_get_driver <-cpuidle_idle_call
                 <idle>-0     [000] d..2    49.309307: mwait_idle <-cpu_idle
                 <idle>-0     [000] d..2    49.309309: need_resched <-mwait_idle
                 <idle>-0     [000] d..2    49.309310: test_ti_thread_flag <-need_resched
                 <idle>-0     [000] d..2    49.309312: trace_power_start.constprop.13 <-mwait_idle
                 <idle>-0     [000] d..2    49.309313: trace_cpu_idle <-mwait_idle
                 <idle>-0     [000] d..2    49.309315: need_resched <-mwait_idle
      
      If a user wants the old format, they can disable the 'irq-info' option:
      
       # tracer: nop
       #
       #           TASK-PID   CPU#      TIMESTAMP  FUNCTION
       #              | |       |          |         |
                 <idle>-0     [000]     49.309305: cpuidle_get_driver <-cpuidle_idle_call
                 <idle>-0     [000]     49.309307: mwait_idle <-cpu_idle
                 <idle>-0     [000]     49.309309: need_resched <-mwait_idle
                 <idle>-0     [000]     49.309310: test_ti_thread_flag <-need_resched
                 <idle>-0     [000]     49.309312: trace_power_start.constprop.13 <-mwait_idle
                 <idle>-0     [000]     49.309313: trace_cpu_idle <-mwait_idle
                 <idle>-0     [000]     49.309315: need_resched <-mwait_idle
      Requested-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  14. 08 Nov 2011 (3 commits)
    • tracing/latency: Fix header output for latency tracers · 7e9a49ef
      Committed by Jiri Olsa
      When the graph tracer (CONFIG_FUNCTION_GRAPH_TRACER) or even the
      function tracer (CONFIG_FUNCTION_TRACER) is not set, the latency
      tracers do not display a proper latency header.
      
      The involved/fixed latency tracers are:
              wakeup_rt
              wakeup
              preemptirqsoff
              preemptoff
              irqsoff
      
      The patch adds proper handling of tracer configuration options for latency
      tracers, and displaying correct header info accordingly.
      
      * The current output (for wakeup tracer) with both graph and function
        tracers disabled is:
      
        # tracer: wakeup
        #
          <idle>-0       0d.h5    1us+:      0:120:R   + [000]     7:  0:R watchdog/0
          <idle>-0       0d.h5    3us+: ttwu_do_activate.clone.1 <-try_to_wake_up
          ...
      
      * The fixed output is:
      
        # tracer: wakeup
        #
        # wakeup latency trace v1.1.5 on 3.1.0-tip+
        # --------------------------------------------------------------------
        # latency: 55 us, #4/4, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
        #    -----------------
        #    | task: migration/0-6 (uid:0 nice:0 policy:1 rt_prio:99)
        #    -----------------
        #
        #                  _------=> CPU#
        #                 / _-----=> irqs-off
        #                | / _----=> need-resched
        #                || / _---=> hardirq/softirq
        #                ||| / _--=> preempt-depth
        #                |||| /     delay
        #  cmd     pid   ||||| time  |   caller
        #     \   /      |||||  \    |   /
             cat-1129    0d..4    1us :   1129:120:R   + [000]     6:  0:R migration/0
             cat-1129    0d..4    2us+: ttwu_do_activate.clone.1 <-try_to_wake_up
      
      * The current output (for wakeup tracer) with only function
        tracer enabled is:
      
        # tracer: wakeup
        #
             cat-1140    0d..4    1us+:   1140:120:R   + [000]     6:  0:R migration/0
             cat-1140    0d..4    2us : ttwu_do_activate.clone.1 <-try_to_wake_up
      
      * The fixed output is:
        # tracer: wakeup
        #
        # wakeup latency trace v1.1.5 on 3.1.0-tip+
        # --------------------------------------------------------------------
        # latency: 207 us, #109/109, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
        #    -----------------
        #    | task: watchdog/1-12 (uid:0 nice:0 policy:1 rt_prio:99)
        #    -----------------
        #
        #                  _------=> CPU#
        #                 / _-----=> irqs-off
        #                | / _----=> need-resched
        #                || / _---=> hardirq/softirq
        #                ||| / _--=> preempt-depth
        #                |||| /     delay
        #  cmd     pid   ||||| time  |   caller
        #     \   /      |||||  \    |   /
          <idle>-0       1d.h5    1us+:      0:120:R   + [001]    12:  0:R watchdog/1
          <idle>-0       1d.h5    3us : ttwu_do_activate.clone.1 <-try_to_wake_up
      
      Link: http://lkml.kernel.org/r/20111107150849.GE1807@m.brq.redhat.com
      
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Fix hash record accounting bug · d4d34b98
      Committed by Steven Rostedt
      If the set_ftrace_filter is cleared by writing just whitespace to
      it, then the filter hash refcounts will be decremented but not
      updated. This causes two bugs:
      
      1) No functions will be enabled for tracing when they all should be
      
      2) If the user clears the set_ftrace_filter twice, it will crash ftrace:
      
      ------------[ cut here ]------------
      WARNING: at /home/rostedt/work/git/linux-trace.git/kernel/trace/ftrace.c:1384 __ftrace_hash_rec_update.part.27+0x157/0x1a7()
      Modules linked in:
      Pid: 2330, comm: bash Not tainted 3.1.0-test+ #32
      Call Trace:
       [<ffffffff81051828>] warn_slowpath_common+0x83/0x9b
       [<ffffffff8105185a>] warn_slowpath_null+0x1a/0x1c
       [<ffffffff810ba362>] __ftrace_hash_rec_update.part.27+0x157/0x1a7
       [<ffffffff810ba6e8>] ? ftrace_regex_release+0xa7/0x10f
       [<ffffffff8111bdfe>] ? kfree+0xe5/0x115
       [<ffffffff810ba51e>] ftrace_hash_move+0x2e/0x151
       [<ffffffff810ba6fb>] ftrace_regex_release+0xba/0x10f
       [<ffffffff8112e49a>] fput+0xfd/0x1c2
       [<ffffffff8112b54c>] filp_close+0x6d/0x78
       [<ffffffff8113a92d>] sys_dup3+0x197/0x1c1
       [<ffffffff8113a9a6>] sys_dup2+0x4f/0x54
       [<ffffffff8150cac2>] system_call_fastpath+0x16/0x1b
      ---[ end trace 77a3a7ee73794a02 ]---
      
      Link: http://lkml.kernel.org/r/20111101141420.GA4918@debian
      Reported-by: Rabin Vincent <rabin@rab.in>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Remove force undef config value left for testing · 8ee3c92b
      Committed by Steven Rostedt
      A forced undef of a config value was used for testing and was
      accidentally left in the final commit. This causes x86 to run
      slower than needed while function tracing is active, and it also
      causes the function graph selftest to fail when DYNAMIC_FTRACE
      is not set. This is because the MCOUNT code expects the ftrace
      code to be processed with the config value set, which happened
      to be forced off.
      
      The forced config option was left in by:
          commit 6331c28c
          ftrace: Fix dynamic selftest failure on some archs
      
      Link: http://lkml.kernel.org/r/20111102150255.GA6973@debian
      
      Cc: stable@vger.kernel.org
      Reported-by: Rabin Vincent <rabin@rab.in>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  15. 05 Nov 2011 (1 commit)
    • tracing: Add boiler plate for subsystem filter · 49aa2951
      Committed by Steven Rostedt
      The system filter can be used to set multiple event filters that
      exist within the system. But currently it displays the last filter
      written, which does not necessarily correspond to the filters within
      the system. The system filter itself is not used to filter any
      events; it is just a means to set the filters of the events within it.
      
      Because this causes an ambiguous state, where the system filter reads
      one filter string while the events within the system have different
      strings, it is best to just show a boiler plate:
      
       ### global filter ###
       # Use this to set filters for multiple events.
       # Only events with the given fields will be affected.
       # If no events are modified, an error message will be displayed here.
      
      If an error occurs while writing to the system filter, the system
      filter will replace the boiler plate with the error message as it
      currently does.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>