  1. 21 Apr 2017 (6 commits)
  2. 25 Mar 2017 (1 commit)
  3. 20 Jun 2016 (1 commit)
  4. 09 Mar 2016 (1 commit)
    • tracing: Make tracer_flags use the right set_flag callback · d39cdd20
      Committed by Chunyu Hu
      While updating the ftrace_stress test of ltp, I encountered
      a strange phenomenon. Execute the following steps:
      
      echo nop > /sys/kernel/debug/tracing/current_tracer
      echo 0 > /sys/kernel/debug/tracing/options/funcgraph-cpu
      bash: echo: write error: Invalid argument
      
      check dmesg:
      [ 1024.903855] nop_test_refuse flag set to 0: we refuse.Now cat trace_options to see the result
      
      The reason is that the trace option test will randomly set up a trace
      option under tracing/options no matter what the current_tracer is,
      but set_tracer_option always uses the set_flag callback
      from the current_tracer. This patch adds a pointer to tracer_flags
      and makes it point to the tracer it belongs to. When the option is
      set, the set_flag of the right tracer will be used no matter
      what the current_tracer is.
      
      The old dummy_tracer_flags was shared by all the tracers that
      don't have a tracer_flags of their own, so it cannot be used to save
      a pointer back to its tracer. Remove it and use a dynamically
      allocated dummy tracer_flags for tracers that need one. As a result,
      no tracers share a tracer_flags, so the sharing check code can be removed.
      
      Saving the current tracer in trace_option_dentry is not a good option,
      as it would waste memory when the debug/trace fs is mounted more than once.
      
      Link: http://lkml.kernel.org/r/1457444222-8654-1-git-send-email-chuhu@redhat.com
      Signed-off-by: Chunyu Hu <chuhu@redhat.com>
      [ Fixed up function tracer options to work with the change ]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      d39cdd20
  5. 20 Nov 2014 (1 commit)
  6. 19 Nov 2014 (1 commit)
    • tracing: Fix race of function probes counting · a9ce7c36
      Committed by Steven Rostedt (Red Hat)
      The function probe counting for traceon and traceoff suffered a race
      condition where if the probe was executing on two or more CPUs at the
      same time, it could decrement the counter by more than one when
      disabling (or enabling) the tracer only once.
      
      The way the traceon and traceoff probes are supposed to work is that
      they disable (or enable) tracing once per count. If a user were to
      echo 'schedule:traceoff:3' into set_ftrace_filter, then when the
      schedule function was called, it would disable tracing. But the count
      should only be decremented once (to 2). Then if the user enabled tracing
      again (via tracing_on file), the next call to schedule would disable
      tracing again and the count would be decremented to 1.
      
      But if multiple CPUs called schedule at the same time, it is possible
      that the count would be decremented more than once because of the
      simple "count--" used.
      
      By reading the count into a local variable and using memory barriers
      we can guarantee that the count would only be decremented once per
      disable (or enable).
      
      The stack trace probe had a similar race, but here the stack trace will
      decrement for each time it is called. But this had the read-modify-
      write race, where it could stack trace more than the number of times
      that was specified. This case we use a cmpxchg to stack trace only the
      number of times specified.
      
      The dump probes can still use the old "update_count()" function as
      they only run once, and that is controlled by the dump logic
      itself.
      
      Link: http://lkml.kernel.org/r/20141118134643.4b550ee4@gandalf.local.home
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      a9ce7c36
  7. 14 Nov 2014 (1 commit)
  8. 30 Apr 2014 (1 commit)
  9. 22 Apr 2014 (1 commit)
    • ftrace: Remove global function list and call function directly · 4104d326
      Committed by Steven Rostedt (Red Hat)
      Since only one global function is allowed to be enabled at a time,
      there is no reason to keep a list of global functions that are
      called.
      
      Instead, simply have all the users of the global ops, use the global ops
      directly, instead of registering their own ftrace_ops. Just switch what
      function is used before enabling the function tracer.
      
      This removes a lot of code as well as the complexity involved with it.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      4104d326
  10. 17 Apr 2014 (1 commit)
    • tracing: Do not try to recreate toplevel set_ftrace_* files · 5d6c97c5
      Committed by Steven Rostedt (Red Hat)
      With the restructuring of the function tracer working with instances, the
      "top level" buffer is a bit special, as the function tracing is mapped
      to the same set of filters. This is done by using a "global_ops" descriptor
      and having the "set_ftrace_filter" and "set_ftrace_notrace" map to it.
      
      When an instance is created, it creates the same files, but they are
      for the local instance and not the global_ops.
      
      The issue is that the local instance creation shares some code with
      the global one, and we end up trying to create the top level
      "set_ftrace_*" files twice. On boot up, we get an error like this:
      
       Could not create debugfs 'set_ftrace_filter' entry
       Could not create debugfs 'set_ftrace_notrace' entry
      
      The reason they failed to be created is that they were created
      twice, and the second attempt gives this error, as you cannot create
      the same file twice.
      Reported-by: Borislav Petkov <bp@alien8.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      5d6c97c5
  11. 21 Feb 2014 (3 commits)
  12. 19 Jul 2013 (1 commit)
  13. 12 Jun 2013 (2 commits)
    • tracing: Add function probe to trigger a ftrace dump of current CPU trace · 90e3c03c
      Committed by Steven Rostedt (Red Hat)
      Add the "cpudump" command to have the current CPU ftrace buffer dumped
      to console if a function is hit. This is useful when debugging a
      triple fault, where you have an idea of a function that is called
      just before the triple fault occurs, and can tell ftrace to dump its
      content out to the console before it continues.
      
      This differs from the "dump" command as it only dumps the content of
      the ring buffer for the currently executing CPU, and does not show
      the contents of the other CPUs.
      
      Format is:
      
        <function>:cpudump
      
      echo 'bad_address:cpudump' > /debug/tracing/set_ftrace_filter
      
      To remove this:
      
      echo '!bad_address:cpudump' > /debug/tracing/set_ftrace_filter
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      90e3c03c
    • tracing: Add function probe to trigger a ftrace dump to console · ad71d889
      Committed by Steven Rostedt (Red Hat)
      Add the "dump" command to have the ftrace buffer dumped to console if
      a function is hit. This is useful when debugging a triple fault,
      where you have an idea of a function that is called just before the
      triple fault occurs, and can tell ftrace to dump its content out
      to the console before it continues.
      
      Format is:
      
        <function>:dump
      
      echo 'bad_address:dump' > /debug/tracing/set_ftrace_filter
      
      To remove this:
      
      echo '!bad_address:dump' > /debug/tracing/set_ftrace_filter
      Requested-by: Luis Claudio R. Goncalves <lclaudio@uudg.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      ad71d889
  14. 15 Mar 2013 (6 commits)
    • tracing: Add function probe to trigger stack traces · dd42cd3e
      Committed by Steven Rostedt (Red Hat)
      Add a function probe that will cause a stack trace to be traced in
      the ring buffer when the given function(s) are called.
      
      format is:
      
       <function>:stacktrace[:<count>]
      
       echo 'schedule:stacktrace' > /debug/tracing/set_ftrace_filter
       cat /debug/tracing/trace_pipe
           kworker/2:0-4329  [002] ...2  2933.558007: <stack trace>
       => kthread
       => ret_from_fork
                <idle>-0     [000] .N.2  2933.558019: <stack trace>
       => rest_init
       => start_kernel
       => x86_64_start_reservations
       => x86_64_start_kernel
           kworker/2:0-4329  [002] ...2  2933.558109: <stack trace>
       => kthread
       => ret_from_fork
      [...]
      
      This can be set to only trace a specific amount of times:
      
       echo 'schedule:stacktrace:3' > /debug/tracing/set_ftrace_filter
       cat /debug/tracing/trace_pipe
                 <...>-58    [003] ...2   841.801694: <stack trace>
       => kthread
       => ret_from_fork
                <idle>-0     [001] .N.2   841.801697: <stack trace>
       => start_secondary
                 <...>-2059  [001] ...2   841.801736: <stack trace>
       => wait_for_common
       => wait_for_completion
       => flush_work
       => tty_flush_to_ldisc
       => input_available_p
       => n_tty_poll
       => tty_poll
       => do_select
       => core_sys_select
       => sys_select
       => system_call_fastpath
      
      To remove these:
      
       echo '!schedule:stacktrace' > /debug/tracing/set_ftrace_filter
       echo '!schedule:stacktrace:0' > /debug/tracing/set_ftrace_filter
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      dd42cd3e
    • ftrace: Separate unlimited probes from count limited probes · 8380d248
      Committed by Steven Rostedt (Red Hat)
      The function tracing probes that trigger traceon or traceoff can be
      set to unlimited, or given a count of # of times to execute.
      
      By separating these two types of probes, we can then use the dynamic
      ftrace function filtering directly, and remove the brute force
      "check if this function called is my probe" routines in ftrace.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      8380d248
    • tracing: Consolidate ftrace_trace_onoff_unreg() into callback · 8b8fa62c
      Committed by Steven Rostedt (Red Hat)
      The only thing ftrace_trace_onoff_unreg() does is to do a strcmp()
      against the cmd parameter to determine what op to unregister. But
      this compare is also done after the location that this function is
      called (and returns). By moving the check for '!' to unregister after
      the strcmp(), the callback function itself can just do the unregister
      and we can get rid of the helper function.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      8b8fa62c
    • tracing: Consolidate updating of count for traceon/off · 1c317143
      Committed by Steven Rostedt (Red Hat)
      Remove some duplicate code and replace it with a helper function.
      This makes the code a bit cleaner.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      1c317143
    • tracing: Consolidate max_tr into main trace_array structure · 12883efb
      Committed by Steven Rostedt (Red Hat)
      Currently, the way the latency tracers and snapshot feature works
      is to have a separate trace_array called "max_tr" that holds the
      snapshot buffer. For latency tracers, this snapshot buffer is used
      to swap the running buffer with this buffer to save the current max
      latency.
      
      The only items needed for the max_tr are really just a copy of the buffer
      itself, the per_cpu data pointers, the time_start timestamp that states
      when the max latency was triggered, and the cpu that the max latency
      was triggered on. All other fields in trace_array are unused by the
      max_tr, making the max_tr mostly bloat.
      
      This change removes the max_tr completely, and adds a new structure
      called trace_buffer, that holds the buffer pointer, the per_cpu data
      pointers, the time_start timestamp, and the cpu where the latency occurred.
      
      The trace_array now has two trace_buffers, one for the normal trace and
      one for the max trace or snapshot. By doing this, not only do we remove
      the bloat from the max_tr, but instances of traces can now use
      their own snapshot feature, instead of only the top level global_trace
      having the snapshot feature and latency tracers for itself.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      12883efb
    • tracing: Replace the static global per_cpu arrays with allocated per_cpu · a7603ff4
      Committed by Steven Rostedt
      The global and max-tr currently use static per_cpu arrays for the CPU data
      descriptors. But in order to get new allocated trace_arrays, they need to
      be allocated per_cpu arrays. Instead of using the static arrays, switch
      the global and max-tr to use allocated data.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      a7603ff4
  15. 24 Jan 2013 (1 commit)
  16. 23 Jan 2013 (1 commit)
    • ftrace: Use only the preempt version of function tracing · 897f68a4
      Committed by Steven Rostedt
      The function tracer had two different versions of function tracing.
      
      The disabling of irqs version and the preempt disable version.
      
      As function tracing is very intrusive and can cause nasty recursion
      issues, it has its own recursion protection. But the old method of
      doing this was a flat layer: if it detected that a recursion was
      happening, it would just return without recording.
      
      This made the preempt version (much faster than the irq disabling one)
      not very useful, because if an interrupt were to occur after the
      recursion flag was set, the interrupt would not be traced at all,
      because every function that was traced would think it recursed on
      itself (due to the context it preempted setting the recursive flag).
      
      Now that we have a recursion flag for every context level, we
      no longer need to worry about that. We can disable preemption,
      set the current context recursion check bit, and go on. If an
      interrupt were to come along, it would check its own context bit
      and happily continue to trace.
      
      As the preempt version is faster than the irq disable version,
      there's no more reason to keep the preempt version around.
      And the irq disable version still had an issue with missing
      out on tracing NMI code.
      
      Remove the irq disable function tracer version and have the
      preempt disable version be the default (and only version).
      
      Before this patch we had from running:
      
       # echo function > /debug/tracing/current_tracer
       # for i in `seq 10`; do ./hackbench 50; done
      Time: 12.028
      Time: 11.945
      Time: 11.925
      Time: 11.964
      Time: 12.002
      Time: 11.910
      Time: 11.944
      Time: 11.929
      Time: 11.941
      Time: 11.924
      
      (average: 11.9512)
      
      Now we have:
      
       # echo function > /debug/tracing/current_tracer
       # for i in `seq 10`; do ./hackbench 50; done
      Time: 10.285
      Time: 10.407
      Time: 10.243
      Time: 10.372
      Time: 10.380
      Time: 10.198
      Time: 10.272
      Time: 10.354
      Time: 10.248
      Time: 10.253
      
      (average: 10.3012)
      
       a 13.8% savings!
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      897f68a4
  17. 06 Dec 2012 (1 commit)
  18. 01 Nov 2012 (2 commits)
  19. 07 Sep 2012 (1 commit)
    • pstore/ftrace: Convert to its own enable/disable debugfs knob · 65f8c95e
      Committed by Anton Vorontsov
      With this patch we no longer reuse function tracer infrastructure, now
      we register our own tracer back-end via a debugfs knob.
      
      It's a bit more code, but that is the only downside. On the bright side we
      have:
      
      - Ability to make persistent_ram module removable (when needed, we can
        move ftrace_ops struct into a module). Note that persistent_ram is still
        not removable for other reasons, but with this patch it's just one
        thing less to worry about;
      
      - Pstore part is more isolated from the generic function tracer. We tried
        it already by registering our own tracer in available_tracers, but that
        way we're losing the ability to see the traces while we record them to
        pstore. This solution is somewhere in the middle: we only register
        "internal ftracer" back-end, but not the "front-end";
      
      - When there is only pstore tracing enabled, the kernel will only write
        to the pstore buffer, omitting function tracer buffer (which, of course,
        still can be enabled via 'echo function > current_tracer').
      Suggested-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Anton Vorontsov <anton.vorontsov@linaro.org>
      65f8c95e
  20. 31 Jul 2012 (1 commit)
    • ftrace: Add default recursion protection for function tracing · 4740974a
      Committed by Steven Rostedt
      As more users of the function tracer utility are being added, they do
      not always add the necessary recursion protection. To protect from
      function recursion due to tracing, if the callback ftrace_ops does not
      explicitly specify that it protects against recursion (by setting
      the FTRACE_OPS_FL_RECURSION_SAFE flag), the list operation will be
      called by the mcount trampoline which adds recursion protection.
      
      If the flag is set, then the function will be called directly with no
      extra protection.
      
      Note, the list operation is called if more than one function callback
      is registered, or if the arch does not support all of the function
      tracer features.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      4740974a
  21. 20 Jul 2012 (2 commits)
  22. 18 Jul 2012 (2 commits)
  23. 07 Jul 2011 (1 commit)
    • ftrace: Fix regression of :mod:module function enabling · 43dd61c9
      Committed by Steven Rostedt
      The new code that allows different utilities to pick and choose
      what functions they trace broke the :mod: hook that allows users
      to trace only functions of a particular module.
      
      The reason is that the :mod: hook bypasses the hash that is setup
      to allow individual users to trace their own functions and uses
      the global hash directly. But if the global hash has not been
      set up, it will cause a bug:
      
      echo '*:mod:radeon' > /sys/kernel/debug/set_ftrace_filter
      
      produces:
      
       [drm:drm_mode_getfb] *ERROR* invalid framebuffer id
       [drm:radeon_crtc_page_flip] *ERROR* failed to reserve new rbo buffer before flip
       BUG: unable to handle kernel paging request at ffffffff8160ec90
       IP: [<ffffffff810d9136>] add_hash_entry+0x66/0xd0
       PGD 1a05067 PUD 1a09063 PMD 80000000016001e1
       Oops: 0003 [#1] SMP Jul  7 04:02:28 phyllis kernel: [55303.858604] CPU 1
       Modules linked in: cryptd aes_x86_64 aes_generic binfmt_misc rfcomm bnep ip6table_filter hid radeon r8169 ahci libahci mii ttm drm_kms_helper drm video i2c_algo_bit intel_agp intel_gtt
      
       Pid: 10344, comm: bash Tainted: G        WC  3.0.0-rc5 #1 Dell Inc. Inspiron N5010/0YXXJJ
       RIP: 0010:[<ffffffff810d9136>]  [<ffffffff810d9136>] add_hash_entry+0x66/0xd0
       RSP: 0018:ffff88003a96bda8  EFLAGS: 00010246
       RAX: ffff8801301735c0 RBX: ffffffff8160ec80 RCX: 0000000000306ee0
       RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff880137c92940
       RBP: ffff88003a96bdb8 R08: ffff880137c95680 R09: 0000000000000000
       R10: 0000000000000001 R11: 0000000000000000 R12: ffffffff81c9df78
       R13: ffff8801153d1000 R14: 0000000000000000 R15: 0000000000000000
       FS: 00007f329c18a700(0000) GS:ffff880137c80000(0000) knlGS:0000000000000000
       CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
       CR2: ffffffff8160ec90 CR3: 000000003002b000 CR4: 00000000000006e0
       DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
       DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
       Process bash (pid: 10344, threadinfo ffff88003a96a000, task ffff88012fcfc470)
       Stack:
        0000000000000fd0 00000000000000fc ffff88003a96be38 ffffffff810d92f5
        ffff88011c4c4e00 ffff880000000000 000000000b69f4d0 ffffffff8160ec80
        ffff8800300e6f06 0000000081130295 0000000000000282 ffff8800300e6f00
       Call Trace:
        [<ffffffff810d92f5>] match_records+0x155/0x1b0
        [<ffffffff810d940c>] ftrace_mod_callback+0xbc/0x100
        [<ffffffff810dafdf>] ftrace_regex_write+0x16f/0x210
        [<ffffffff810db09f>] ftrace_filter_write+0xf/0x20
        [<ffffffff81166e48>] vfs_write+0xc8/0x190
        [<ffffffff81167001>] sys_write+0x51/0x90
        [<ffffffff815c7e02>] system_call_fastpath+0x16/0x1b
       Code: 48 8b 33 31 d2 48 85 f6 75 33 49 89 d4 4c 03 63 08 49 8b 14 24 48 85 d2 48 89 10 74 04 48 89 42 08 49 89 04 24 4c 89 60 08 31 d2
       RIP [<ffffffff810d9136>] add_hash_entry+0x66/0xd0
        RSP <ffff88003a96bda8>
       CR2: ffffffff8160ec90
       ---[ end trace a5d031828efdd88e ]---
      Reported-by: Brian Marete <marete@toshnix.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      43dd61c9
  24. 19 May 2011 (1 commit)
    • ftrace: Implement separate user function filtering · b848914c
      Committed by Steven Rostedt
      ftrace_ops that are registered to trace functions can now be
      agnostic to each other with respect to what functions they trace.
      Each ops has its own hash of the functions it wants to trace
      and a hash of the functions it does not want to trace. An empty
      filter hash denotes that all functions not in the notrace hash
      should be traced.
      
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: NSteven Rostedt <rostedt@goodmis.org>
      b848914c