1. 02 Jun, 2009 (2 commits)
    • tracing: combine the default tracers into one config · 897f17a6
      Committed by Steven Rostedt
      Both the event tracer and the sched switch plugin are selected by default
      by all generic tracers. But if no generic tracer is enabled, their options
      appear. Since either one of them will select the other, it only
      makes sense to have the default tracers be selected by one option.
      
      [ Impact: clean up kconfig menu ]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: fix config options to not show when automatically selected · 5e0a0939
      Committed by Steven Rostedt
      There are two options that are selected by all tracers, but we want
      to have those options available when no tracer is selected. These are
      
       The event tracer and sched switch tracer.
      
      They are enabled by all tracers, but if a tracer is not selected we want
      the options to appear. All tracers, including these two, select TRACING.
      Thus what we would like to do is:
      
        config EVENT_TRACER
      	bool "prompt"
      	depends on TRACING
      	select TRACING
      
      But that gives us a bug in the kbuild system since we just created a
      circular dependency. We only want the prompt to show when TRACING is off.
      
      This patch adds a GENERIC_TRACER option that all tracers will select instead of
      TRACING. The two options (sched switch and event tracer) will select
      TRACING directly and depend on !GENERIC_TRACER. This solves the circular
      dependency.
      
      [ Impact: hide options that are selected by default ]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  2. 26 May, 2009 (1 commit)
    • ftrace: Add task_comm support for trace_event · b11c53e1
      Committed by Zhaolei
      If we enable a trace event alone, without any tracer running (such as the
      function tracer, sched switch tracer, etc...), it can't output enough
      task command (comm) information.
      
      We need to use the tracing_{start/stop}_cmdline_record() helpers
      which are designed to keep track of cmdlines for any tasks that
      were scheduled during the tracing.
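
      A minimal sketch of how an event enable/disable path can use these
      helpers (tracing_{start,stop}_cmdline_record() are the helpers named
      above; the refcount and hook names below are made up for illustration):

        static int cmdline_record_ref;               /* how many enabled events need comm info */

        static void event_enable_cmdline(void)
        {
                if (!cmdline_record_ref++)
                        tracing_start_cmdline_record();  /* begin saving pid -> comm mappings */
        }

        static void event_disable_cmdline(void)
        {
                if (!--cmdline_record_ref)
                        tracing_stop_cmdline_record();   /* stop once the last event is disabled */
        }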
      
      Before this patch:
       # echo 1 > debugfs/tracing/events/sched/sched_switch/enable
       # cat debugfs/tracing/trace
       # tracer: nop
       #
       #           TASK-PID    CPU#    TIMESTAMP  FUNCTION
       #              | |       |          |         |
                  <...>-2289  [000] 526276.724790: sched_switch: task bash:2289 [120] ==> sshd:2287 [120]
                  <...>-2287  [000] 526276.725231: sched_switch: task sshd:2287 [120] ==> bash:2289 [120]
                  <...>-2289  [000] 526276.725452: sched_switch: task bash:2289 [120] ==> sshd:2287 [120]
                  <...>-2287  [000] 526276.727181: sched_switch: task sshd:2287 [120] ==> swapper:0 [140]
                 <idle>-0     [000] 526277.032734: sched_switch: task swapper:0 [140] ==> events/0:5 [115]
                  <...>-5     [000] 526277.032782: sched_switch: task events/0:5 [115] ==> swapper:0 [140]
       ...
      
      After this patch:
       # tracer: nop
       #
       #           TASK-PID    CPU#    TIMESTAMP  FUNCTION
       #              | |       |          |         |
                   bash-2269  [000] 527347.989229: sched_switch: task bash:2269 [120] ==> sshd:2267 [120]
                   sshd-2267  [000] 527347.990960: sched_switch: task sshd:2267 [120] ==> bash:2269 [120]
                   bash-2269  [000] 527347.991143: sched_switch: task bash:2269 [120] ==> sshd:2267 [120]
                   sshd-2267  [000] 527347.992959: sched_switch: task sshd:2267 [120] ==> swapper:0 [140]
                 <idle>-0     [000] 527348.531989: sched_switch: task swapper:0 [140] ==> events/0:5 [115]
               events/0-5     [000] 527348.532115: sched_switch: task events/0:5 [115] ==> swapper:0 [140]
       ...
      
      Changelog:
      v1->v2: Update Kconfig to select CONTEXT_SWITCH_TRACER in
              ENABLE_EVENT_TRACING
      v2->v3: v2 solved the problem caused by enabling EVENT_TRACING
              alone, but when CONFIG_FTRACE is off and CONFIG_TRACING is
              selected by another config option, the compile failure happened
              again. This version solves it.
      
      [ Impact: fix incomplete output of event tracing ]
      Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      LKML-Reference: <4A14FDFE.2080402@cn.fujitsu.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
  3. 08 May, 2009 (1 commit)
  4. 06 May, 2009 (1 commit)
    • ring-buffer: add benchmark and tester · 5092dbc9
      Committed by Steven Rostedt
      This patch adds code that can benchmark the ring buffer as well as
      test it. This code can be compiled into the kernel (not recommended)
      or as a module.
      
      A separate ring buffer is used so as not to interfere with other users, like
      ftrace. It creates a producer and a consumer (with an option to disable creation
      of the consumer) and will run for 10 seconds, then sleep for 10 seconds,
      and then repeat.
      
      While running, the producer will write 10-byte payloads into the ring
      buffer containing just the current CPU number. The reader will
      continually try to read the buffer, alternating between reading the
      buffer event by event and reading full pages.
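
      A rough sketch of one producer iteration under these assumptions (the
      ring_buffer_* calls are the kernel's ring buffer API; buffer creation
      and the surrounding loop are omitted):

        #include <linux/ring_buffer.h>
        #include <linux/smp.h>

        /* 'buffer' was created earlier with ring_buffer_alloc() */
        static void rb_hammer_once(struct ring_buffer *buffer,
                                   unsigned long *hit, unsigned long *missed)
        {
                struct ring_buffer_event *event;
                int *entry;

                event = ring_buffer_lock_reserve(buffer, 10);  /* reserve a 10-byte payload */
                if (!event) {
                        (*missed)++;                           /* reservation failed */
                        return;
                }
                entry = ring_buffer_event_data(event);
                *entry = smp_processor_id();                   /* payload is just the CPU number */
                ring_buffer_unlock_commit(buffer, event);
                (*hit)++;
        }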
      
      The output is done via pr_info(), thus it will fill up the syslog.
      
        Starting ring buffer hammer
        End ring buffer hammer
        Time:     9000349 (usecs)
        Overruns: 12578640
        Read:     5358440  (by events)
        Entries:  0
        Total:    17937080
        Missed:   0
        Hit:      17937080
        Entries per millisec: 1993
        501 ns per entry
        Sleeping for 10 secs
        Starting ring buffer hammer
        End ring buffer hammer
        Time:     9936350 (usecs)
        Overruns: 0
        Read:     28146644  (by pages)
        Entries:  74
        Total:    28146718
        Missed:   0
        Hit:      28146718
        Entries per millisec: 2832
        353 ns per entry
        Sleeping for 10 secs
      
      Time:      is the time the test ran
      Overruns:  the number of events that were overwritten and not read
      Read:      the number of events read (either by pages or events)
      Entries:   the number of entries left in the buffer
                       (the by-pages reader will only read full pages)
      Total:     Entries + Read + Overruns
      Missed:    the number of entries that failed to write
      Hit:       the number of entries that were written
      
      The above example shows that it takes ~353 nanosecs per entry when
      there is a reader, reading by pages (and no overruns)
      
      The event by event reader slowed the producer down to 501 nanosecs.
      
      [ Impact: see how changes to the ring buffer affect stability and performance ]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  5. 20 Apr, 2009 (3 commits)
  6. 14 Apr, 2009 (1 commit)
    • tracing/infrastructure: separate event tracer from event support · 5f77a88b
      Committed by Tom Zanussi
      Add a new config option, CONFIG_EVENT_TRACING, that gets selected
      when CONFIG_TRACING is selected and adds everything needed by the stuff
      in trace_export - basically all the event tracing support needed by e.g.
      bprint, minus the actual events, which are only included if
      CONFIG_EVENT_TRACER is selected.
      
      So CONFIG_EVENT_TRACER can be used to turn on or off the generated events
      (what I think of as the 'event tracer'), while CONFIG_EVENT_TRACING turns
      on or off the base event tracing support used by both the event tracer and
      the other things such as bprint that can't be configured out.
      Signed-off-by: Tom Zanussi <tzanussi@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: fweisbec@gmail.com
      LKML-Reference: <1239178441.10295.34.camel@tropicana>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  7. 10 Apr, 2009 (1 commit)
    • tracing: fix document references · 4d1f4372
      Committed by Li Zefan
      When moving documents to Documentation/trace/, I forgot to
      grep Kconfig to find those references.
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Pekka Paalanen <pq@iki.fi>
      Cc: eduard.munteanu@linux360.ro
      LKML-Reference: <49DE97EF.7080208@cn.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  8. 30 Mar, 2009 (1 commit)
  9. 25 Mar, 2009 (2 commits)
    • tracing: move function profiler data out of function struct · 493762fc
      Committed by Steven Rostedt
      Impact: reduce size of memory in function profiler
      
      The function profiler originally placed its counters in the
      function records themselves. There are around 20 thousand different functions on
      a normal system, and that adds 20 thousand counters for a profiling
      feature even when it is not needed.
      
      A normal run of the profiler yields only a couple of thousand functions
      executed, depending on what is being profiled. This means we have around
      18 thousand useless counters.
      
      This patch rectifies this by moving the data out of the function
      records used by dynamic ftrace. Data is preallocated to hold the functions
      when the profiling begins. Checks are made during profiling to see if
      more records should be allocated, and they are allocated if it is safe
      to do so.
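
      A hedged sketch of the shape of such a preallocated per-function record
      (the struct and field names are assumptions for illustration, not
      necessarily the patch's exact layout):

        /* one record per profiled function, stored in preallocated pages (sketch) */
        struct ftrace_profile {
                struct hlist_node       node;     /* looked up by instruction pointer */
                unsigned long           ip;       /* address of the profiled function */
                unsigned long           counter;  /* number of times it was hit */
        };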
      
      This also removes the dependency on dynamic ftrace, and also
      removes the overhead of having it enabled.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
    • tracing: add function profiler · bac429f0
      Committed by Steven Rostedt
      Impact: new profiling feature
      
      This patch adds a function profiler. In debugfs/tracing/ two new
      files are created.
      
        function_profile_enabled  - to enable or disable profiling
      
        trace_stat/functions   - the profiled functions.
      
      For example:
      
        echo 1 > /debugfs/tracing/function_profile_enabled
        ./hackbench 50
        echo 0 > /debugfs/tracing/function_profile_enabled
      
      yields:
      
        cat /debugfs/tracing/trace_stat/functions
      
        Function                               Hit
        --------                               ---
        _spin_lock                        10106442
        _spin_unlock                      10097492
        kfree                              6013704
        _spin_unlock_irqrestore            4423941
        _spin_lock_irqsave                 4406825
        __phys_addr                        4181686
        __slab_free                        4038222
        dput                               4030130
        path_put                           4023387
        unroll_tree_refs                   4019532
      [...]
      
      The most hit functions are listed first. Functions that are not
      hit are not listed.
      
      This feature depends on and uses dynamic function tracing. When the
      function profiling is disabled, no overhead occurs. But it still
      takes up around 300KB to hold the data, thus it is not recommended
      to keep it enabled on systems low on memory.
      
      When a '1' is echoed into the function_profile_enabled file, the
      counters for each function are reset back to zero. Thus you can see what
      functions are hit most by different programs.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
  10. 24 Mar, 2009 (1 commit)
    • tracing: Fix TRACING_SUPPORT dependency for PPC32 · 45b95608
      Committed by Anton Vorontsov
      commit 40ada30f ("tracing: clean up menu"),
      despite the "clean up" in its purpose, introduced a behavioural
      change for Kconfig symbols: we are no longer able to select tracing
      support on PPC32 (because IRQFLAGS_SUPPORT isn't yet implemented).
      
      IRQFLAGS_SUPPORT is not mandatory for most tracers; the tracing core
      has a special case for platforms w/o irqflags (which, by the way, has
      become useless as of the commit above).
      
      Though, according to Ingo Molnar, there were periodic build failures on
      weird, unmaintained architectures that had no irqflags-tracing support
      and hence didn't know the raw_irqs_save/restore primitives. Thus we'd
      better not enable irqflags-less tracing for all architectures.
      
      This patch restores the old behaviour for PPC32, and thus brings the
      tracing back. Other architectures can either add themselves to the
      exception list or (better) implement TRACE_IRQFLAGS_SUPPORT.
      Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: linuxppc-dev@ozlabs.org
      LKML-Reference: <20090323220724.GA9851@oksana.dev.rtsoft.ru>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  11. 16 Mar, 2009 (1 commit)
  12. 13 Mar, 2009 (1 commit)
  13. 07 Mar, 2009 (2 commits)
    • tracing/core: drop the old trace_printk() implementation in favour of trace_bprintk() · 769b0441
      Committed by Frederic Weisbecker
      Impact: faster and lighter tracing
      
      Now that we have trace_bprintk(), which is faster, consumes less
      memory than trace_printk(), and has the same purpose, we can drop
      the old implementation in favour of the binary one from trace_bprintk().
      This means we move the whole implementation of trace_bprintk() to
      trace_printk(), so the API doesn't change, except that we must now use
      trace_seq_bprintk() to print the TRACE_PRINT entries.
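
      For illustration, a small hedged sketch of what this means at a call
      site (trace_printk() is the unchanged API discussed above; the function
      and variable names are made up for the example):

        #include <linux/kernel.h>

        static void report_read(unsigned long long sector, unsigned int nbytes)
        {
                /* usage is unchanged ... */
                trace_printk("read sector %llu, %u bytes\n", sector, nbytes);

                /*
                 * ... but the ring buffer now stores only the format pointer
                 * and the raw argument values; the text is rendered lazily,
                 * when the trace file is read, via trace_seq_bprintk().
                 */
        }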
      
      Some changes that result from this:
      
      - Previously, trace_bprintk depended on a single tracer and couldn't
        work without it. That tracer has been dropped, and the whole implementation
        of trace_printk() (like the module formats management) is now integrated
        into the tracing core (it comes with CONFIG_TRACING), though we keep the file
        trace_printk (previously trace_bprintk.c) where we can find the module
        management. Thus we don't overflow trace.c.
      
      - changes some parts to use trace_seq_bprintk() to print TRACE_PRINT entries.
      
      - slightly change the trace_printk/trace_vprintk macros to support non-builtin format
        constants, and fix 'const' qualifier warnings. But this is all transparent to
        developers.
      
      - etc...
      
      V2:
      
      - Rebase against last changes
      - Fix misspellings in the changelog
      
      V3:
      
      - Rebase against last changes (moving trace_printk() to kernel.h)
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      LKML-Reference: <1236356510-8381-5-git-send-email-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • tracing: infrastructure for supporting binary record · 1427cdf0
      Committed by Lai Jiangshan
      Impact: save on memory for tracing
      
      Current tracers typically use a struct (like struct ftrace_entry,
      struct ctx_switch_entry, struct special_entr, etc...) to record a binary
      event. These structs can only record their own kind of events.
      A new kind of tracer needs a new struct and a lot of code to handle it.
      
      So we need a generic binary record for events. This infrastructure
      is for this purpose.
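
      A hedged sketch of what such a generic binary record can look like
      (this shows the idea only; the field names and layout are assumptions,
      not necessarily what the patch defines):

        /* generic record: format pointer plus the raw argument words (sketch) */
        struct bprint_entry {
                struct trace_entry      ent;    /* common header carried by every trace entry */
                unsigned long           ip;     /* where the record was emitted */
                const char              *fmt;   /* pointer to the builtin format string */
                u32                     buf[];  /* raw argument values, decoded at read time */
        };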
      
      [fweisbec@gmail.com: rebase against latest -tip, make it safe while sched
      tracing as reported by Steven Rostedt]
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      LKML-Reference: <1236356510-8381-3-git-send-email-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  14. 06 Mar, 2009 (2 commits)
  15. 25 Feb, 2009 (1 commit)
    • tracing: add event trace infrastructure · b77e38aa
      Committed by Steven Rostedt
      This patch creates the event tracing infrastructure of ftrace.
      It will create the files:
      
       /debug/tracing/available_events
       /debug/tracing/set_event
      
      The available_events will list the trace points that have been
      registered with the event tracer.
      
      set_event will allow the user to enable or disable an event hook.
      
      example:
      
       # echo sched_wakeup > /debug/tracing/set_event
      
      Will enable the sched_wakeup event (if it is registered).
      
       # echo "!sched_wakeup" >> /debug/tracing/set_event
      
      Will disable the sched_wakeup event (and only that event).
      
       # echo > /debug/tracing/set_event
      
      Will disable all events (notice the '>')
      
       # cat /debug/tracing/available_events > /debug/tracing/set_event
      
      Will enable all registered event hooks.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
  16. 19 Feb, 2009 (1 commit)
    • tracing: have function trace select kallsyms · 4d7a077c
      Committed by Steven Rostedt
      Impact: fix output of function tracer to be useful
      
      The function tracer is pretty useless if KALLSYMS is not configured.
      Unless you are good at reading hex values, the function tracer should
      select the KALLSYMS configuration.
      
      Also, the dynamic function tracer will fail its self test if KALLSYMS
      is not selected.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
  17. 16 Feb, 2009 (1 commit)
    • trace: mmiotrace to the tracer menu in Kconfig · 6bc5c366
      Committed by Pekka Paalanen
      Impact: cosmetic change in Kconfig menu layout
      
      This patch was originally suggested by Peter Zijlstra, but it seems it
      was forgotten.
      
      CONFIG_MMIOTRACE and CONFIG_MMIOTRACE_TEST were selectable
      directly under the Kernel hacking / debugging menu in the kernel
      configuration system. They were present only for x86 and x86_64.
      
      Other tracers that use the ftrace tracing framework are in their own
      sub-menu. This patch moves the mmiotrace configuration options there.
      Since the Kconfig file, where the tracer menu is, is not architecture
      specific, HAVE_MMIOTRACE_SUPPORT is introduced and provided only by
      x86/x86_64. CONFIG_MMIOTRACE now depends on it.
      Signed-off-by: Pekka Paalanen <pq@iki.fi>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  18. 11 Feb, 2009 (1 commit)
  19. 09 Feb, 2009 (2 commits)
  20. 08 Feb, 2009 (1 commit)
    • ring-buffer: add NMI protection for spinlocks · 78d904b4
      Committed by Steven Rostedt
      Impact: prevent deadlock in NMI
      
      The ring buffers are not yet totally lockless when writing to
      the buffer. When a writer crosses a page boundary, it grabs a per cpu spinlock
      to protect against a reader. The spinlocks taken by a writer are not
      to protect against other writers, since a writer can only write to
      its own per cpu buffer. The spinlocks protect against readers that
      can touch any cpu buffer. The writers are made to be reentrant
      with the spinlocks disabling interrupts.
      
      The problem arises when an NMI writes to the buffer, and that write
      crosses a page boundary. If it grabs a spinlock, it can be racing
      with another writer (since disabling interrupts does not protect
      against NMIs) or with a reader on the same CPU. Luckily, most of the
      users are not reentrant, which protects against this issue. But if a
      user of the ring buffer becomes reentrant (which is what the ring
      buffers do allow), and the NMI also writes to the ring buffer, then
      we risk the chance of a deadlock.
      
      This patch moves the ftrace_nmi_enter called by nmi_enter() to the
      ring buffer code. It renames the current ftrace_nmi_enter that is
      used by arch specific code to arch_ftrace_nmi_enter and updates
      the Kconfig to handle it.
      
      When an NMI is called, it will set a per cpu variable in the ring buffer
      code and will clear it when the NMI exits. If a write to the ring buffer
      crosses page boundaries inside an NMI, a trylock is used on the spin
      lock instead. If the spinlock fails to be acquired, then the entry
      is discarded.
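
      A hedged sketch of that idea (a per-cpu flag plus a trylock; the flag
      and lock names are assumptions and the real locking details are
      simplified here):

        #include <linux/percpu.h>
        #include <linux/smp.h>
        #include <linux/spinlock.h>

        static DEFINE_PER_CPU(int, rb_in_nmi);   /* set on NMI entry, cleared on NMI exit */

        /* page-crossing slow path: take the lock, but never spin in NMI context */
        static int rb_lock_for_page_cross(spinlock_t *lock)
        {
                if (per_cpu(rb_in_nmi, smp_processor_id()))
                        return spin_trylock(lock);   /* 0 means: discard the entry, don't deadlock */
                spin_lock(lock);
                return 1;
        }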
      
      This bug appeared in the ftrace work in the RT tree, where event tracing
      is reentrant. This workaround solved the deadlocks that appeared there.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
  21. 03 Feb, 2009 (1 commit)
  22. 14 Jan, 2009 (1 commit)
    • tracing: add a new workqueue tracer · e1d8aa9f
      Committed by Frederic Weisbecker
      Impact: new tracer
      
      The workqueue tracer provides some statistical information
      about each cpu workqueue thread, such as the number of works
      inserted and executed since their creation. It can help
      to evaluate the amount of work each of them has to perform.
      For example, it can help a developer decide whether he should
      choose a per cpu workqueue instead of a singlethreaded one.
      
      It only traces statistical information for now, but it will probably
      provide event tracing later too.
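
      A hedged sketch of the per-cpu bookkeeping this implies (the struct and
      field names are assumptions made for illustration, not the patch's
      exact definitions):

        /* one stat record per cpu workqueue thread (sketch) */
        struct cpu_workqueue_stats {
                int             cpu;        /* cpu the workqueue thread is bound to */
                pid_t           pid;        /* pid of that thread, for the name lookup */
                unsigned int    inserted;   /* works queued since the thread was created */
                unsigned int    executed;   /* works actually run */
        };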
      
      Such a tracer could also help, and be improved, to aid the development
      of rt priority sorted workqueues.
      
      To have a snapshot of the workqueues state at any time, just do
      
      cat /debugfs/tracing/trace_stat/workqueues
      
      I.e.:
      
        1    125        125       reiserfs/1
        1      0          0       scsi_tgtd/1
        1      0          0       aio/1
        1      0          0       ata/1
        1    114        114       kblockd/1
        1      0          0       kintegrityd/1
        1   2147       2147       events/1
      
        0      0          0       kpsmoused
        0    105        105       reiserfs/0
        0      0          0       scsi_tgtd/0
        0      0          0       aio/0
        0      0          0       ata_aux
        0      0          0       ata/0
        0      0          0       cqueue
        0      0          0       kacpi_notify
        0      0          0       kacpid
        0    149        149       kblockd/0
        0      0          0       kintegrityd/0
        0   1000       1000       khelper
        0   2270       2270       events/0
      
      Changes in V2:
      
      _ Drop the static array based on NR_CPU and dynamically allocate the stat array
        with num_possible_cpus() and other cpu mask facilities....
      _ Trace workqueue insertion at a bit lower level (insert_work instead of queue_work) to handle
        even the workqueue barriers.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  23. 11 Jan, 2009 (1 commit)
    • trace: mmiotrace to the tracer menu in Kconfig · fe6f90e5
      Committed by Pekka Paalanen
      Impact: cosmetic change in Kconfig menu layout
      
      This patch was originally suggested by Peter Zijlstra, but it seems it
      was forgotten.
      
      CONFIG_MMIOTRACE and CONFIG_MMIOTRACE_TEST were selectable
      directly under the Kernel hacking / debugging menu in the kernel
      configuration system. They were present only for x86 and x86_64.
      
      Other tracers that use the ftrace tracing framework are in their own
      sub-menu. This patch moves the mmiotrace configuration options there.
      Since the Kconfig file, where the tracer menu is, is not architecture
      specific, HAVE_MMIOTRACE_SUPPORT is introduced and provided only by
      x86/x86_64. CONFIG_MMIOTRACE now depends on it.
      Signed-off-by: Pekka Paalanen <pq@iki.fi>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  24. 06 Jan, 2009 (1 commit)
  25. 30 Dec, 2008 (2 commits)
    • tracing/kmemtrace: export kmemtrace_mark_alloc_node() / kmemtrace_mark_free() · 3fd4bc01
      Committed by Ingo Molnar
      Impact: build fix
      
      Also fix up Kconfig dependencies and include files.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • tracing/kmemtrace: normalize the raw tracer event to the unified tracing API · 36994e58
      Committed by Frederic Weisbecker
      Impact: new tracer plugin
      
      This patch adapts kmemtrace raw events tracing to the unified tracing API.
      
      To enable and use this tracer, just do the following:
      
       echo kmemtrace > /debugfs/tracing/current_tracer
       cat /debugfs/tracing/trace
      
      You will have the following output:
      
       # tracer: kmemtrace
       #
       #
       # ALLOC  TYPE  REQ   GIVEN  FLAGS           POINTER         NODE    CALLER
       # FREE   |      |     |       |              |   |            |        |
       # |
      
      type_id 1 call_site 18446744071565527833 ptr 18446612134395152256
      type_id 0 call_site 18446744071565585597 ptr 18446612134405955584 bytes_req 4096 bytes_alloc 4096 gfp_flags 208 node -1
      type_id 1 call_site 18446744071565585534 ptr 18446612134405955584
      type_id 0 call_site 18446744071565585597 ptr 18446612134405955584 bytes_req 4096 bytes_alloc 4096 gfp_flags 208 node -1
      type_id 0 call_site 18446744071565636711 ptr 18446612134345164672 bytes_req 240 bytes_alloc 240 gfp_flags 208 node -1
      type_id 1 call_site 18446744071565585534 ptr 18446612134405955584
      type_id 0 call_site 18446744071565585597 ptr 18446612134405955584 bytes_req 4096 bytes_alloc 4096 gfp_flags 208 node -1
      type_id 0 call_site 18446744071565636711 ptr 18446612134345164912 bytes_req 240 bytes_alloc 240 gfp_flags 208 node -1
      type_id 1 call_site 18446744071565585534 ptr 18446612134405955584
      type_id 0 call_site 18446744071565585597 ptr 18446612134405955584 bytes_req 4096 bytes_alloc 4096 gfp_flags 208 node -1
      type_id 0 call_site 18446744071565636711 ptr 18446612134345165152 bytes_req 240 bytes_alloc 240 gfp_flags 208 node -1
      type_id 0 call_site 18446744071566144042 ptr 18446612134346191680 bytes_req 1304 bytes_alloc 1312 gfp_flags 208 node -1
      type_id 1 call_site 18446744071565585534 ptr 18446612134405955584
      type_id 0 call_site 18446744071565585597 ptr 18446612134405955584 bytes_req 4096 bytes_alloc 4096 gfp_flags 208 node -1
      type_id 1 call_site 18446744071565585534 ptr 18446612134405955584
      
      That was to stay backward compatible with the format output produced in
      linux/tracepoint.h.
      
      This is the default output, but note that I tried something else.
      
      If you change an option:
      
      echo kmem_minimalistic > /debugfs/trace_options
      
      and then cat /debugfs/trace, you will have the following output:
      
       # tracer: kmemtrace
       #
       #
       # ALLOC  TYPE  REQ   GIVEN  FLAGS           POINTER         NODE    CALLER
       # FREE   |      |     |       |              |   |            |        |
       # |
      
         -      C                            0xffff88007c088780          file_free_rcu
         +      K   4096   4096   000000d0   0xffff88007cad6000     -1   getname
         -      C                            0xffff88007cad6000          putname
         +      K   4096   4096   000000d0   0xffff88007cad6000     -1   getname
         +      K    240    240   000000d0   0xffff8800790dc780     -1   d_alloc
         -      C                            0xffff88007cad6000          putname
         +      K   4096   4096   000000d0   0xffff88007cad6000     -1   getname
         +      K    240    240   000000d0   0xffff8800790dc870     -1   d_alloc
         -      C                            0xffff88007cad6000          putname
         +      K   4096   4096   000000d0   0xffff88007cad6000     -1   getname
         +      K    240    240   000000d0   0xffff8800790dc960     -1   d_alloc
         +      K   1304   1312   000000d0   0xffff8800791d7340     -1   reiserfs_alloc_inode
         -      C                            0xffff88007cad6000          putname
         +      K   4096   4096   000000d0   0xffff88007cad6000     -1   getname
         -      C                            0xffff88007cad6000          putname
         +      K    992   1000   000000d0   0xffff880079045b58     -1   alloc_inode
         +      K    768   1024   000080d0   0xffff88007c096400     -1   alloc_pipe_info
         +      K    240    240   000000d0   0xffff8800790dca50     -1   d_alloc
         +      K    272    320   000080d0   0xffff88007c088780     -1   get_empty_filp
         +      K    272    320   000080d0   0xffff88007c088000     -1   get_empty_filp
      
      Yeah I shall confess kmem_minimalistic should be: kmem_alternative.
      
      Whatever, I find it more readable, but this is a personal opinion of course.
      We can drop it if you want.
      
      On the ALLOC/FREE column, + means an allocation and - a free.
      
      On the type column, you have K = kmalloc, C = cache, P = page
      
      I would like the flags to be GFP_* strings, but it would not be easy to do
      that without breaking the columns with strings....
      
      About the node...it seems to always be -1. I don't know why but that shouldn't
      be difficult to find.
      
      I moved linux/tracepoint.h to trace/tracepoint.h as well. I think it would
      be easier to find the tracer headers if they are all in their common
      directory.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  26. 18 Dec, 2008 (1 commit)
    • trace: add a way to enable or disable the stack tracer · f38f1d2a
      Committed by Steven Rostedt
      Impact: enhancement to stack tracer
      
      The stack tracer is currently either on when configured in or
      off when it is not. It cannot be disabled when it is configured on
      (besides disabling the function tracer that it uses).
      
      This patch adds a way to enable or disable the stack tracer at
      run time. It defaults off on bootup, but a kernel parameter 'stacktrace'
      has been added to enable it on bootup.
      
      A new sysctl has been added "kernel.stack_tracer_enabled" to let
      the user enable or disable the stack tracer at run time.
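
      A minimal sketch of how such a runtime switch typically works (the
      register/unregister calls are the kernel's ftrace API; the ops and
      helper names here are assumptions, not the patch's exact code):

        #include <linux/ftrace.h>

        static struct ftrace_ops stack_trace_ops;  /* callback that samples stack usage (assumed) */

        static void stack_tracer_switch(int enable)
        {
                if (enable)
                        register_ftrace_function(&stack_trace_ops);   /* start checking stack depth */
                else
                        unregister_ftrace_function(&stack_trace_ops); /* stop: the overhead goes away */
        }
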
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  27. 12 Dec, 2008 (1 commit)
  28. 03 Dec, 2008 (1 commit)
  29. 26 Nov, 2008 (3 commits)
  30. 23 Nov, 2008 (1 commit)