1. 17 Jul, 2014 1 commit
  2. 15 May, 2014 1 commit
    • tracing: Add __bitmask() macro to trace events to cpumasks and other bitmasks · 4449bf92
      Steven Rostedt (Red Hat) committed
      Being able to record a cpumask in events can be useful, as some events
      may affect only some CPUs. There is no standard way to record the
      cpumask, and converting it to a string during the trace is rather
      expensive, as traces happen in hot paths. It is better to record the
      raw mask and parse it at print time.
      
      The following macros were added for use with the TRACE_EVENT() macro:
      
        __bitmask()
        __assign_bitmask()
        __get_bitmask()
      
      To test this, I added this to the sched_migrate_task event, which
      looked like this:
      
      TRACE_EVENT(sched_migrate_task,
      
      	TP_PROTO(struct task_struct *p, int dest_cpu, const struct cpumask *cpus),
      
      	TP_ARGS(p, dest_cpu, cpus),
      
      	TP_STRUCT__entry(
      		__array(	char,	comm,	TASK_COMM_LEN	)
      		__field(	pid_t,	pid			)
      		__field(	int,	prio			)
      		__field(	int,	orig_cpu		)
      		__field(	int,	dest_cpu		)
      		__bitmask(	cpumask, num_possible_cpus()	)
      	),
      
      	TP_fast_assign(
      		memcpy(__entry->comm, p->comm, TASK_COMM_LEN);
      		__entry->pid		= p->pid;
      		__entry->prio		= p->prio;
      		__entry->orig_cpu	= task_cpu(p);
      		__entry->dest_cpu	= dest_cpu;
      		__assign_bitmask(cpumask, cpumask_bits(cpus), num_possible_cpus());
      	),
      
      	TP_printk("comm=%s pid=%d prio=%d orig_cpu=%d dest_cpu=%d cpumask=%s",
      		  __entry->comm, __entry->pid, __entry->prio,
      		  __entry->orig_cpu, __entry->dest_cpu,
      		  __get_bitmask(cpumask))
      );
      
      With the output of:
      
              ksmtuned-3613  [003] d..2   485.220508: sched_migrate_task: comm=ksmtuned pid=3615 prio=120 orig_cpu=3 dest_cpu=2 cpumask=00000000,0000000f
           migration/1-13    [001] d..5   485.221202: sched_migrate_task: comm=ksmtuned pid=3614 prio=120 orig_cpu=1 dest_cpu=0 cpumask=00000000,0000000f
                   awk-3615  [002] d.H5   485.221747: sched_migrate_task: comm=rcu_preempt pid=7 prio=120 orig_cpu=0 dest_cpu=1 cpumask=00000000,000000ff
           migration/2-18    [002] d..5   485.222062: sched_migrate_task: comm=ksmtuned pid=3615 prio=120 orig_cpu=2 dest_cpu=3 cpumask=00000000,0000000f
      
      Link: http://lkml.kernel.org/r/1399377998-14870-6-git-send-email-javi.merino@arm.com
      Link: http://lkml.kernel.org/r/20140506132238.22e136d1@gandalf.local.home
      Suggested-by: Javi Merino <javi.merino@arm.com>
      Tested-by: Javi Merino <javi.merino@arm.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      4449bf92
  3. 09 Apr, 2014 1 commit
  4. 22 Mar, 2014 1 commit
  5. 21 Mar, 2014 1 commit
    • tracing: Fix array size mismatch in format string · 87291347
      Vaibhav Nagarnaik committed
      In event format strings, the array size is reported in two locations:
      once in the array subscript and again via the "size:" attribute. The
      values reported there mismatch.
      
      For example, in sched:sched_switch the prev_comm and next_comm
      character arrays have the subscript value [32], whereas the actual
      field size is 16.
      
      name: sched_switch
      ID: 301
      format:
              field:unsigned short common_type;       offset:0;       size:2; signed:0;
              field:unsigned char common_flags;       offset:2;       size:1; signed:0;
              field:unsigned char common_preempt_count;       offset:3;       size:1; signed:0;
              field:int common_pid;   offset:4;       size:4; signed:1;
      
              field:char prev_comm[32];       offset:8;       size:16;        signed:1;
              field:pid_t prev_pid;   offset:24;      size:4; signed:1;
              field:int prev_prio;    offset:28;      size:4; signed:1;
              field:long prev_state;  offset:32;      size:8; signed:1;
              field:char next_comm[32];       offset:40;      size:16;        signed:1;
              field:pid_t next_pid;   offset:56;      size:4; signed:1;
              field:int next_prio;    offset:60;      size:4; signed:1;
      
      After bisection, the following commit was blamed:
      92edca07 tracing: Use direct field, type and system names
      
      This commit removes the duplication of strings for field->name and
      field->type, assuming that all the strings passed to
      __trace_define_field() are immutable. This is not true for arrays, where
      the type string is created in the event_storage variable and field->type
      for all array fields points to event_storage.
      
      Use __stringify() to create a string constant for the type string.
      
      Also, get rid of event_storage and event_storage_mutex, which are no
      longer needed.
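      
      As a hedged sketch of the approach (the exact macro body in the patch
      may differ), the array case can build its "type[len]" string as a
      compile-time constant, so field->type points at an immutable string
      literal instead of the shared event_storage buffer:
      
      #include <linux/stringify.h>
      
      /*
       * Sketch: #type "[" __stringify(len) "]" is concatenated into a
       * single string literal at compile time.
       */
      #define __array(type, item, len)                                    \
              ret = trace_define_field(event_call,                        \
                              #type "[" __stringify(len) "]", #item,      \
                              offsetof(typeof(field), item),              \
                              sizeof(field.item), is_signed_type(type),   \
                              FILTER_OTHER);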
      
      Also, an added benefit is that this reduces the overhead of events a bit more:
      
         text    data     bss     dec     hex filename
      8424787 2036472 1302528 11763787         b3804b vmlinux
      8420814 2036408 1302528 11759750         b37086 vmlinux.patched
      
      Link: http://lkml.kernel.org/r/1392349908-29685-1-git-send-email-vnagarnaik@google.com
      
      Cc: Laurent Chavey <chavey@google.com>
      Cc: stable@vger.kernel.org # 3.10+
      Signed-off-by: Vaibhav Nagarnaik <vnagarnaik@google.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      87291347
  6. 07 Mar, 2014 3 commits
    • tracing: Use helper functions in event assignment to shrink macro size · 3fd40d1e
      Steven Rostedt committed
      The functions that assign the contents for the ftrace events are
      defined by the TRACE_EVENT() macros. Each event has its own unique
      way to assign data to its buffer. With over 500 events,
      that means there are 500 functions assigning data uniquely for each
      event (not really that many, as DECLARE_EVENT_CLASS() and multiple
      DEFINE_EVENT()s will only need a single function).
      
      By making helper functions in the core kernel to do some of the work
      instead, we can shrink the size of the kernel down a bit.
      
      With a kernel configured with 502 events, the change in size was:
      
         text    data     bss     dec     hex filename
      12987390        1913504 9785344 24686238        178ae9e /tmp/vmlinux
      12959102        1913504 9785344 24657950        178401e /tmp/vmlinux.patched
      
      That's a total of 28288 bytes, which comes down to 56 bytes per event.
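      
      As an illustrative sketch of the approach (the helper names below follow
      the description, but the exact signatures are assumptions, not taken
      from the patch), buffer setup and commit move into shared functions, and
      the generated per-event code only fills in its unique fields in between:
      
      /* defined once in the tracing core instead of once per event */
      struct ftrace_event_buffer {
              struct ring_buffer              *buffer;
              struct ring_buffer_event        *event;
              struct ftrace_event_file        *ftrace_file;
              unsigned long                   flags;
              int                             pc;
      };
      
      void *ftrace_event_buffer_reserve(struct ftrace_event_buffer *fbuffer,
                                        struct ftrace_event_file *ftrace_file,
                                        unsigned long len);
      void ftrace_event_buffer_commit(struct ftrace_event_buffer *fbuffer);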
      
      Link: http://lkml.kernel.org/r/20120810034708.370808175@goodmis.org
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      3fd40d1e
    • tracing: Move event storage for array from macro to standalone function · 35bb4399
      Steven Rostedt committed
      The code that shows array fields for events is defined for all events.
      This can add up quite a bit when you have over 500 events.
      
      By making helper functions in the core kernel to do the work
      instead, we can shrink the size of the kernel down a bit.
      
      With a kernel configured with 502 events, the change in size was:
      
         text    data     bss     dec     hex filename
      12990946        1913568 9785344 24689858        178bcc2 /tmp/vmlinux
      12987390        1913504 9785344 24686238        178ae9e /tmp/vmlinux.patched
      
      That's a total of 3556 bytes, which comes down to 7 bytes per event.
      The savings are modest, but this code is only called at event
      initialization anyway.
      
      Link: http://lkml.kernel.org/r/20120810034708.084036335@goodmis.org
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      35bb4399
    • tracing: Move raw output code from macro to standalone function · 1d6bae96
      Steven Rostedt committed
      The code for trace events to format the raw recorded event data
      into human readable format in the 'trace' file is repeated for every
      event in the system. When you have over 500 events, this can add up
      quite a bit.
      
      By making helper functions in the core kernel to do the work
      instead, we can shrink the size of the kernel down a bit.
      
      With a kernel configured with 502 events, the change in size was:
      
         text    data     bss     dec     hex filename
      12991007        1913568 9785344 24689919        178bcff /tmp/vmlinux.orig
      12990946        1913568 9785344 24689858        178bcc2 /tmp/vmlinux.patched
      
      Note, this version does not save as much as the version of this patch
      I had a few years ago. That is because in the mean time, commit
      f71130de ("tracing: Add a helper function for event print functions")
      did a lot of the work my original patch did. But this change helps
      slightly, and is part of a larger clean up to reduce the size much further.
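      
      A minimal sketch of the pattern (assuming the ftrace_raw_output_prep()
      helper added by f71130de; the generated function shown here is
      illustrative, not the patch's exact code):
      
      static notrace enum print_line_t
      ftrace_raw_output_sample(struct trace_iterator *iter, int flags,
                               struct trace_event *trace_event)
      {
              struct trace_seq *s = &iter->seq;
              int ret;
      
              /* shared helper: checks the entry type, prints the event name */
              ret = ftrace_raw_output_prep(iter, trace_event);
              if (ret)
                      return ret;
      
              /* only the event-specific format string stays per event */
              trace_seq_printf(s, "field=%d\n",
                               ((struct ftrace_raw_sample *)iter->ent)->field);
      
              return TRACE_TYPE_HANDLED;
      }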
      
      Link: http://lkml.kernel.org/r/20120810034707.378538034@goodmis.org
      
      Cc: Li Zefan <lizefan@huawei.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      1d6bae96
  7. 10 Jan, 2014 1 commit
  8. 22 Dec, 2013 4 commits
    • tracing: Add and use generic set_trigger_filter() implementation · bac5fb97
      Tom Zanussi committed
      Add a generic event_command.set_trigger_filter() op implementation and
      have the current set of trigger commands use it - this essentially
      gives them all support for filters.
      
      Syntactically, filters are supported by adding 'if <filter>' just
      after the command, in which case only events matching the filter will
      invoke the trigger.  For example, to add a filter to an
      enable/disable_event command:
      
          echo 'enable_event:system:event if common_pid == 999' > \
                    .../othersys/otherevent/trigger
      
      The above command will only enable the system:event event if the
      common_pid field in the othersys:otherevent event is 999.
      
      As another example, to add a filter to a stacktrace command:
      
          echo 'stacktrace if common_pid == 999' > \
                         .../somesys/someevent/trigger
      
      The above command will only trigger a stacktrace if the common_pid
      field in the event is 999.
      
      The filter syntax is the same as that described in the 'Event
      filtering' section of Documentation/trace/events.txt.
      
      Because triggers can now use filters, the trigger-invoking logic needs
      to be moved in those cases - e.g. for ftrace_raw_event_calls, if a
      trigger has a filter associated with it, the trigger invocation now
      needs to happen after the { assign; } part of the call, in order for
      the trigger condition to be tested.
      
      There's still a SOFT_DISABLED-only check at the top of e.g. the
      ftrace_raw_events function, so when an event is soft disabled but not
      because of the presence of a trigger, the original SOFT_DISABLED
      behavior remains unchanged.
      
      There's also a bit of trickiness in that some triggers need to avoid
      being invoked while an event is currently in the process of being
      logged, since the trigger may itself log data into the trace buffer.
      Thus we make sure the current event is committed before invoking those
      triggers.  To do that, we split the trigger invocation in two - the
      first part (event_triggers_call()) checks the filter using the current
      trace record; if a command has the post_trigger flag set, it sets a
      bit for itself in the return value, otherwise it directly invokes the
      trigger.  Once all commands have been either invoked or set their
      return flag, event_triggers_call() returns.  The current record is
      then either committed or discarded; if any commands have deferred
      their triggers, those commands are finally invoked following the close
      of the current event by event_triggers_post_call().
      
      To simplify the above and make it more efficient, the TRIGGER_COND bit
      is introduced, which is set only if a soft-disabled trigger needs to
      use the log record for filter testing or needs to wait until the
      current log record is closed.
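      
      A hedged sketch of the resulting call pattern in a generated event
      function (identifiers are illustrative; the real macro-generated code
      differs in detail):
      
      enum event_trigger_type tt = ETT_NONE;
      
      /* ... the { assign; } part has filled in the trace record 'entry' ... */
      
      /* phase 1: filter-test each trigger; post_trigger commands only set
       * their bit in tt, non-deferred ones are invoked directly */
      if (test_bit(FTRACE_EVENT_FL_TRIGGER_COND_BIT, &ftrace_file->flags))
              tt = event_triggers_call(ftrace_file, entry);
      
      /* commit (or discard) the current record first... */
      if (!filter_check_discard(ftrace_file, entry, buffer, event))
              trace_buffer_unlock_commit(buffer, event, irq_flags, pc);
      
      /* phase 2: ...then run deferred triggers, which may log data */
      if (tt)
              event_triggers_post_call(ftrace_file, tt);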
      
      The syscall event invocation code is also changed in analogous ways.
      
      Because event triggers need to be able to create and free filters,
      this also adds a couple external wrappers for the existing
      create_filter and free_filter functions, which are too generic to be
      made extern functions themselves.
      
      Link: http://lkml.kernel.org/r/7164930759d8719ef460357f143d995406e4eead.1382622043.git.tom.zanussi@linux.intel.com
      Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      bac5fb97
    • tracing: Add 'enable_event' and 'disable_event' event trigger commands · 7862ad18
      Tom Zanussi committed
      Add 'enable_event' and 'disable_event' event_command commands.
      
      enable_event and disable_event event triggers are added by the user
      via these commands in a similar way and using practically the same
      syntax as the analogous 'enable_event' and 'disable_event' ftrace
      function commands, but instead of writing to the set_ftrace_filter
      file, the enable_event and disable_event triggers are written to the
      per-event 'trigger' files:
      
          echo 'enable_event:system:event' > .../othersys/otherevent/trigger
          echo 'disable_event:system:event' > .../othersys/otherevent/trigger
      
      The above commands will enable or disable the 'system:event' trace
      events whenever the othersys:otherevent events are hit.
      
      This also adds a 'count' version that limits the number of times the
      command will be invoked:
      
          echo 'enable_event:system:event:N' > .../othersys/otherevent/trigger
          echo 'disable_event:system:event:N' > .../othersys/otherevent/trigger
      
      Where N is the number of times the command will be invoked.
      
      The above commands will enable or disable the 'system:event'
      trace events whenever the othersys:otherevent events are hit, but only
      N times.
      
      This also makes the find_event_file() helper function extern, since
      it's useful from other places, such as the event triggers code.
      
      Link: http://lkml.kernel.org/r/f825f3048c3f6b026ee37ae5825f9fc373451828.1382622043.git.tom.zanussi@linux.intel.com
      Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      7862ad18
    • tracing: Add 'stacktrace' event trigger command · f21ecbb3
      Tom Zanussi committed
      Add 'stacktrace' event_command.  stacktrace event triggers are added
      by the user via this command in a similar way and using practically
      the same syntax as the analogous 'stacktrace' ftrace function command,
      but instead of writing to the set_ftrace_filter file, the stacktrace
      event trigger is written to the per-event 'trigger' files:
      
          echo 'stacktrace' > .../tracing/events/somesys/someevent/trigger
      
      The above command will turn on stacktraces for someevent, i.e. whenever
      someevent is hit, a stacktrace will be logged.
      
      This also adds a 'count' version that limits the number of times the
      command will be invoked:
      
          echo 'stacktrace:N' > .../tracing/events/somesys/someevent/trigger
      
      Where N is the number of times the command will be invoked.
      
      The above command will log a stacktrace for each of the first N times
      someevent is hit.
      
      Link: http://lkml.kernel.org/r/0c30c008a0828c660aa0e1bbd3255cf179ed5c30.1382622043.git.tom.zanussi@linux.intel.com
      Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f21ecbb3
    • tracing: Add 'snapshot' event trigger command · 93e31ffb
      Tom Zanussi committed
      Add 'snapshot' event_command.  snapshot event triggers are added by
      the user via this command in a similar way and using practically the
      same syntax as the analogous 'snapshot' ftrace function command, but
      instead of writing to the set_ftrace_filter file, the snapshot event
      trigger is written to the per-event 'trigger' files:
      
          echo 'snapshot' > .../somesys/someevent/trigger
      
      The above command will turn on snapshots for someevent, i.e. whenever
      someevent is hit, a snapshot will be taken.
      
      This also adds a 'count' version that limits the number of times the
      command will be invoked:
      
          echo 'snapshot:N' > .../somesys/someevent/trigger
      
      Where N is the number of times the command will be invoked.
      
      The above command will take a snapshot for each of the first N times
      someevent is hit.
      
      This also adds a new tracing_alloc_snapshot() function. The existing
      tracing_snapshot_alloc() function is a special version of
      tracing_snapshot() that also does the snapshot allocation; the snapshot
      triggers want to be able to do just the allocation without taking a
      snapshot. The existing tracing_snapshot_alloc() in turn now calls
      tracing_alloc_snapshot() underneath to do that allocation.
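      
      A hedged sketch of the split (alloc_snapshot() is assumed here to be the
      internal allocation helper; error handling in the real code is more
      involved):
      
      int tracing_alloc_snapshot(void)
      {
              /* allocate the spare (max) buffer only; take no snapshot */
              return alloc_snapshot(&global_trace);
      }
      
      void tracing_snapshot_alloc(void)
      {
              if (tracing_alloc_snapshot() < 0)
                      return;
              tracing_snapshot();  /* existing behavior: allocate + snapshot */
      }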
      
      Link: http://lkml.kernel.org/r/c9524dd07ce01f9dcbd59011290e0a8d5b47d7ad.1382622043.git.tom.zanussi@linux.intel.com
      Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
      [ fix up from kbuild test robot <fengguang.wu@intel.com> report ]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      93e31ffb
  9. 21 Dec, 2013 2 commits
    • tracing: Add 'traceon' and 'traceoff' event trigger commands · 2a2df321
      Tom Zanussi committed
      Add 'traceon' and 'traceoff' event_command commands.  traceon and
      traceoff event triggers are added by the user via these commands in a
      similar way and using practically the same syntax as the analogous
      'traceon' and 'traceoff' ftrace function commands, but instead of
      writing to the set_ftrace_filter file, the traceon and traceoff
      triggers are written to the per-event 'trigger' files:
      
          echo 'traceon' > .../tracing/events/somesys/someevent/trigger
          echo 'traceoff' > .../tracing/events/somesys/someevent/trigger
      
      The above command will turn tracing on or off whenever someevent is
      hit.
      
      This also adds a 'count' version that limits the number of times the
      command will be invoked:
      
          echo 'traceon:N' > .../tracing/events/somesys/someevent/trigger
          echo 'traceoff:N' > .../tracing/events/somesys/someevent/trigger
      
      Where N is the number of times the command will be invoked.
      
      The above commands will turn tracing on or off whenever someevent
      is hit, but only N times.
      
      Some common register/unregister_trigger() implementations of the
      event_command reg()/unreg() callbacks are also provided, which add and
      remove trigger instances to the per-event list of triggers, and
      arm/disarm them as appropriate.  event_trigger_callback() is a
      general-purpose event_command func() implementation that orchestrates
      command parsing and registration for most normal commands.
      
      Most event commands will use these, but some will override and
      possibly reuse them.
      
      The event_trigger_init(), event_trigger_free(), and
      event_trigger_print() functions are meant to be common implementations
      of the event_trigger_ops init(), free(), and print() ops,
      respectively.
      
      Most trigger_ops implementations will use these, but some will
      override and possibly reuse them.
      
      Link: http://lkml.kernel.org/r/00a52816703b98d2072947478dd6e2d70cde5197.1382622043.git.tom.zanussi@linux.intel.com
      Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      2a2df321
    • tracing: Add basic event trigger framework · 85f2b082
      Tom Zanussi committed
      Add a 'trigger' file for each trace event, enabling 'trace event
      triggers' to be set for trace events.
      
      'trace event triggers' are patterned after the existing 'ftrace
      function triggers' implementation except that triggers are written to
      per-event 'trigger' files instead of to a single file such as the
      'set_ftrace_filter' used for ftrace function triggers.
      
      The implementation is meant to be entirely separate from ftrace
      function triggers, in order to keep the respective implementations
      relatively simple and to allow them to diverge.
      
      The event trigger functionality is built on top of SOFT_DISABLE
      functionality.  It adds a TRIGGER_MODE bit to the ftrace_event_file
      flags which is checked when any trace event fires.  Triggers set for a
      particular event need to be checked regardless of whether that event
      is actually enabled or not - getting an event to fire even if it's not
      enabled is what's already implemented by SOFT_DISABLE mode, so trigger
      mode directly reuses that.  Event triggers essentially inherit the soft
      disable logic in __ftrace_event_enable_disable() while adding a bit of
      logic and trigger reference counting via tm_ref on top of that in a
      new trace_event_trigger_enable_disable() function.  Because the base
      __ftrace_event_enable_disable() code now needs to be invoked from
      outside trace_events.c, a wrapper is also added for those usages.
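      
      As a sketch of how this lands in the event fast path (illustrative; the
      real code is generated by the TRACE_EVENT() macros):
      
      /* top of ftrace_raw_event_<call>() */
      if (test_bit(FTRACE_EVENT_FL_TRIGGER_MODE_BIT, &ftrace_file->flags))
              event_triggers_call(ftrace_file);
      
      /* soft-disabled: the tracepoint fired only to service triggers */
      if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &ftrace_file->flags))
              return;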
      
      The triggers for an event are actually invoked via a new function,
      event_triggers_call(), and code is also added to invoke them for
      ftrace_raw_event calls as well as syscall events.
      
      The main part of the patch creates a new trace_events_trigger.c file
      to contain the trace event triggers implementation.
      
      The standard open, read, and release file operations are implemented
      here.
      
      The open() implementation sets up for the various open modes of the
      'trigger' file.  It creates and attaches the trigger iterator and sets
      up the command parser.  If opened for reading set up the trigger
      seq_ops.
      
      The write() implementation parses the event trigger written to the
      'trigger' file, looks up the trigger command, and passes it along to
      that event_command's func() implementation for command-specific
      processing.
      
      The release() implementation does whatever cleanup is needed to
      release the 'trigger' file, like releasing the parser and trigger
      iterator, etc.
      
      A couple of functions for event command registration and
      unregistration are added, along with a list to add them to and a mutex
      to protect them, as well as an (initially empty) registration function
      to add the set of commands that will be added by future commits, and
      call to it from the trace event initialization code.
      
      Also added are a couple of trigger-specific data structures needed for
      these implementations, such as a trigger iterator and a struct for
      trigger-specific data.
      
      A couple of structs consisting mostly of functions meant to be implemented
      in command-specific ways, event_command and event_trigger_ops, are
      used by the generic event trigger command implementations.  They're
      being put into trace.h alongside the other trace_event data structures
      and functions, in the expectation that they'll be needed in several
      trace_event-related files such as trace_events_trigger.c and
      trace_events.c.
      
      The event_command.func() function is meant to be called by the trigger
      parsing code in order to add a trigger instance to the corresponding
      event.  It essentially coordinates adding a live trigger instance to
      the event, and arming the trigger on the event.
      
      Every event_command func() implementation essentially does the
      same thing for any command:
      
         - choose ops - use the value of param to choose either the plain or
           the count version of event_trigger_ops specific to the command
         - do the register or unregister of those ops
         - associate a filter, if specified, with the triggering event
      
      The reg() and unreg() ops allow command-specific implementations for
      event_trigger_op registration and unregistration, and the
      get_trigger_ops() op allows command-specific event_trigger_ops
      selection to be parameterized.  When a trigger instance is added, the
      reg() op essentially adds that trigger to the triggering event and
      arms it, while unreg() does the opposite.  The set_filter() function
      is used to associate a filter with the trigger - if the command
      doesn't specify a set_filter() implementation, the command will ignore
      filters.
      
      Each command has an associated trigger_type, which serves double duty,
      both as a unique identifier for the command as well as a value that
      can be used for setting a trigger mode bit during trigger invocation.
      
      The signature of func() adds a pointer to the event_command struct,
      used to invoke those functions, along with a command_data param that
      can be passed to the reg/unreg functions.  This allows func()
      implementations to use command-specific blobs and supports code
      re-use.
      
      The event_trigger_ops.func() callback corresponds to the trigger 'probe'
      function that gets called when the triggering event is actually
      invoked.  The other functions are used to list the trigger when
      needed, along with a couple of mundane bookkeeping functions.
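      
      A hedged sketch of the two ops structures described above (field order
      and exact signatures are reconstructed from the description and may
      differ from the patch):
      
      struct event_trigger_ops {
              /* the 'probe': called when the triggering event fires */
              void (*func)(struct event_trigger_data *data);
              int  (*init)(struct event_trigger_ops *ops,
                           struct event_trigger_data *data);
              void (*free)(struct event_trigger_ops *ops,
                           struct event_trigger_data *data);
              int  (*print)(struct seq_file *m,
                            struct event_trigger_ops *ops,
                            struct event_trigger_data *data);
      };
      
      struct event_command {
              struct list_head        list;
              char                    *name;
              enum event_trigger_type trigger_type;  /* unique id + mode bit */
              bool                    post_trigger;
              /* called by the parser to add a trigger instance to an event */
              int (*func)(struct event_command *cmd_ops,
                          struct ftrace_event_file *file,
                          char *glob, char *cmd, char *params);
              int (*reg)(char *glob, struct event_trigger_ops *ops,
                         struct event_trigger_data *data,
                         struct ftrace_event_file *file);
              void (*unreg)(char *glob, struct event_trigger_ops *ops,
                            struct event_trigger_data *data,
                            struct ftrace_event_file *file);
              int (*set_filter)(char *filter_str,
                                struct event_trigger_data *data,
                                struct ftrace_event_file *file);
              struct event_trigger_ops *(*get_trigger_ops)(char *cmd,
                                                           char *param);
      };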
      
      This also moves event_file_data() into trace.h so it can be used
      outside of trace_events.c.
      
      Link: http://lkml.kernel.org/r/316d95061accdee070aac8e5750afba0192fa5b9.1382622043.git.tom.zanussi@linux.intel.com
      Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
      Idea-by: Steve Rostedt <rostedt@goodmis.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      85f2b082
  10. 19 Nov, 2013 1 commit
  11. 06 Nov, 2013 1 commit
    • tracing: Update event filters for multibuffer · f306cc82
      Tom Zanussi committed
      The trace event filters are still tied to event calls rather than
      event files, which means you don't get what you'd expect when using
      filters in the multibuffer case:
      
      Before:
      
        # echo 'bytes_alloc > 8192' > /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
        # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
        bytes_alloc > 8192
        # mkdir /sys/kernel/debug/tracing/instances/test1
        # echo 'bytes_alloc > 2048' > /sys/kernel/debug/tracing/instances/test1/events/kmem/kmalloc/filter
        # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
        bytes_alloc > 2048
        # cat /sys/kernel/debug/tracing/instances/test1/events/kmem/kmalloc/filter
        bytes_alloc > 2048
      
      Setting the filter in tracing/instances/test1/events shouldn't affect
      the same event in tracing/events as it does above.
      
      After:
      
        # echo 'bytes_alloc > 8192' > /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
        # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
        bytes_alloc > 8192
        # mkdir /sys/kernel/debug/tracing/instances/test1
        # echo 'bytes_alloc > 2048' > /sys/kernel/debug/tracing/instances/test1/events/kmem/kmalloc/filter
        # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
        bytes_alloc > 8192
        # cat /sys/kernel/debug/tracing/instances/test1/events/kmem/kmalloc/filter
        bytes_alloc > 2048
      
      We'd like to just move the filter directly from ftrace_event_call to
      ftrace_event_file, but there are a couple of cases that don't yet have
      multibuffer support and therefore have to continue using the current
      event_call-based filters.  For those cases, a new USE_CALL_FILTER bit
      is added to the event_call flags, whose main purpose is to keep the
      old behavior for those cases until they can be updated with
      multibuffer support; at that point, the USE_CALL_FILTER flag (and the
      new associated call_filter_check_discard() function) can go away.
      
      The multibuffer support also made filter_current_check_discard()
      redundant, so this change removes that function as well and replaces
      it with filter_check_discard() (or call_filter_check_discard() as
      appropriate).
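      
      A hedged sketch of the resulting pair of helpers (signatures and bodies
      are approximate):
      
      /* file-based filter: used by events with multibuffer support */
      int filter_check_discard(struct ftrace_event_file *file, void *rec,
                               struct ring_buffer *buffer,
                               struct ring_buffer_event *event)
      {
              if (unlikely(file->flags & FTRACE_EVENT_FL_FILTERED) &&
                  !filter_match_preds(file->filter, rec)) {
                      ring_buffer_discard_commit(buffer, event);
                      return 1;
              }
              return 0;
      }
      
      /* call-based filter: kept for USE_CALL_FILTER cases until they are
       * converted to multibuffer support */
      int call_filter_check_discard(struct ftrace_event_call *call, void *rec,
                                    struct ring_buffer *buffer,
                                    struct ring_buffer_event *event)
      {
              if (unlikely(call->flags & TRACE_EVENT_FL_FILTERED) &&
                  !filter_match_preds(call->filter, rec)) {
                      ring_buffer_discard_commit(buffer, event);
                      return 1;
              }
              return 0;
      }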
      
      Link: http://lkml.kernel.org/r/f16e9ce4270c62f46b2e966119225e1c3cca7e60.1382620672.git.tom.zanussi@linux.intel.com
      Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f306cc82
  12. 03 Aug, 2013 1 commit
  13. 01 Aug, 2013 1 commit
  14. 27 Jul, 2013 1 commit
    • tracing: Add __tracepoint_string() to export string pointers · 102c9323
      Steven Rostedt (Red Hat) committed
      There are several tracepoints (mostly in RCU) that reference a string
      pointer and use the print format "%s" to display a string that
      exists in the kernel, instead of copying the actual string into the
      ring buffer (saving time and ring buffer space).
      
      But this has an issue with userspace tools that read the binary buffers:
      they have the address of the string but no access to the string
      itself. The end result is just output that looks like:
      
       rcu_dyntick:          ffffffff818adeaa 1 0
       rcu_dyntick:          ffffffff818adeb5 0 140000000000000
       rcu_dyntick:          ffffffff818adeb5 0 140000000000000
       rcu_utilization:      ffffffff8184333b
       rcu_utilization:      ffffffff8184333b
      
      The above is pretty useless when read by the userspace tools. Ideally
      we would want something that looks like this:
      
       rcu_dyntick:          Start 1 0
       rcu_dyntick:          End 0 140000000000000
       rcu_dyntick:          Start 140000000000000 0
       rcu_callback:         rcu_preempt rhp=0xffff880037aff710 func=put_cred_rcu 0/4
       rcu_callback:         rcu_preempt rhp=0xffff880078961980 func=file_free_rcu 0/5
       rcu_dyntick:          End 0 1
      
      trace_printk(), which also stores only the address of the format
      string instead of recording the string into the buffer itself, exports
      the mapping of kernel addresses to format strings via the printk_format
      file in the debugfs tracing directory.
      
      The tracepoint strings can use this same method and output the format
      to the same file, and the userspace tools will be able to decipher
      the address without any modification.
      
      The tracepoint strings need their own section to save the strings, because
      the trace_printk section will cause the trace_printk() buffers to be
      allocated if anything exists within the section. As trace_printk() is only
      used for debugging and should never remain in a production kernel, we can
      not use the trace_printk sections.
      
      Add a new tracepoint_str section that will also be examined by the output
      of the printk_format file.
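      
      A hedged sketch of the mechanism (reconstructed from the description;
      the patch's exact macros may differ):
      
      /* place the string pointer in its own section, examined by the
       * printk_format output */
      #define __tracepoint_string \
              __attribute__((section("__tracepoint_str")))
      
      #define tracepoint_string(str)                                          \
              ({                                                              \
                      static const char *___tp_str __tracepoint_string = str; \
                      ___tp_str;                                              \
              })
      
      /* usage: only the pointer is recorded in the ring buffer */
      trace_rcu_utilization(tracepoint_string("Start context switch"));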
      
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      102c9323
  15. 10 May, 2013 1 commit
    • tracing: Modify soft-mode only if there's no other referrer · 1cf4c073
      Masami Hiramatsu committed
      Modify the soft-mode flag only if there is no other soft-mode referrer
      (currently only the ftrace triggers), by using a reference
      counter in each ftrace_event_file.
      
      Without this fix, adding and removing several different
      enable/disable_event triggers on the same event clears the
      soft-mode bit from the ftrace_event_file. This also
      happens when the glob is mistyped while setting triggers.
      
      e.g.
      
       # echo vfs_symlink:enable_event:net:netif_rx > set_ftrace_filter
       # cat events/net/netif_rx/enable
       0*
       # echo typo_func:enable_event:net:netif_rx > set_ftrace_filter
       # cat events/net/netif_rx/enable
       0
       # cat set_ftrace_filter
       #### all functions enabled ####
       vfs_symlink:enable_event:net:netif_rx:unlimited
      
      As above, we still have a trigger, but soft-mode is gone.
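      
      A hedged sketch of the reference-counting idea (the field and bit names
      follow the description; details may differ from the patch):
      
      /* in struct ftrace_event_file */
      atomic_t  sm_ref;   /* soft-mode reference counter */
      
      /* enable: set SOFT_MODE only on the 0 -> 1 transition */
      if (atomic_inc_return(&file->sm_ref) > 1)
              return 0;
      set_bit(FTRACE_EVENT_FL_SOFT_MODE_BIT, &file->flags);
      
      /* disable: clear SOFT_MODE only on the 1 -> 0 transition */
      if (atomic_dec_return(&file->sm_ref) > 0)
              return 0;
      clear_bit(FTRACE_EVENT_FL_SOFT_MODE_BIT, &file->flags);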
      
      Link: http://lkml.kernel.org/r/20130509054429.30398.7464.stgit@mhiramat-M0-7522
      
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: David Sharp <dhsharp@google.com>
      Cc: Hiraku Toyooka <hiraku.toyooka.gu@hitachi.com>
      Cc: Tom Zanussi <tom.zanussi@intel.com>
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      1cf4c073
  16. 20 Apr, 2013 1 commit
  17. 15 Mar, 2013 8 commits
    • tracing: Add a way to soft disable trace events · 417944c4
      Steven Rostedt (Red Hat) committed
      In order to let triggers enable or disable events, we need a 'soft'
      method for doing so. For example, if a function probe is added that
      lets a user enable or disable events when a function is called, that
      change must be done without taking locks or a mutex, and definitely
      it can't sleep. But the full enabling of a tracepoint is expensive.
      
      By adding a 'SOFT_DISABLE' flag, and converting the flags to be updated
      without the protection of a mutex (using set/clear_bit()), this soft
      disable flag can be used to allow critical sections to enable or disable
      events from being traced (after the event has been placed into "SOFT_MODE").
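      
      A minimal sketch of what the fast path gains (illustrative; the real
      check is generated by the event macros):
      
      /* top of ftrace_raw_event_<call>(): bail out cheaply when the event is
       * soft-disabled. The tracepoint stays registered, so critical sections
       * can flip this bit with set_bit()/clear_bit() without taking locks. */
      if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &ftrace_file->flags))
              return;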
      
      Some caveats though: The comm recorder (to map pids to comms) can not
      be soft disabled (yet). If you disable an event with a "soft"
      disable and wait a while before reading the trace, the comm cache may be
      replaced and you'll get a bunch of <...> for comms in the trace.
      
      Reading the "enable" file for an event that is disabled will now give
      you "0*" where the '*' denotes that the tracepoint is still active but
      the event itself is "disabled".
      
      [ fixed _BIT used in & operation : thanks to Dan Carpenter and smatch ]
      
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: Tom Zanussi <tom.zanussi@linux.intel.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      417944c4
    • tracing: Fix comments for ftrace_event_file/call flags · 57d01ad0
      Steven Rostedt (Red Hat) committed
      Most of the flags for the struct ftrace_event_file were moved over
      to the flags of the struct ftrace_event_call, but the comments were
      never updated.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      57d01ad0
    • tracing: Consolidate max_tr into main trace_array structure · 12883efb
      Steven Rostedt (Red Hat) committed
      Currently, the way the latency tracers and snapshot feature works
      is to have a separate trace_array called "max_tr" that holds the
      snapshot buffer. For latency tracers, this snapshot buffer is used
      to swap the running buffer with this buffer to save the current max
      latency.
      
      The only items needed for the max_tr are really just a copy of the buffer
      itself, the per_cpu data pointers, the time_start timestamp that states
      when the max latency was triggered, and the cpu that the max latency
      was triggered on. All other fields in trace_array are unused by the
      max_tr, making the max_tr mostly bloat.
      
      This change removes the max_tr completely, and adds a new structure
      called trace_buffer, that holds the buffer pointer, the per_cpu data
      pointers, the time_start timestamp, and the cpu where the latency occurred.
      
      The trace_array now has two trace_buffers, one for the normal trace and
      one for the max trace or snapshot. By doing this, not only do we remove
      the bloat from the max_tr, but trace instances can now use their own
      snapshot feature, rather than only the top-level global_trace having
      the snapshot feature and latency tracers for itself.
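      
      A sketch of the new structure as described (field names follow the
      description; the actual layout may differ slightly):
      
      struct trace_buffer {
              struct trace_array              *tr;
              struct ring_buffer              *buffer;
              struct trace_array_cpu __percpu *data;
              u64     time_start;  /* when the max latency was hit */
              int     cpu;         /* cpu of the max latency */
      };
      
      struct trace_array {
              /* ... */
              struct trace_buffer     trace_buffer;  /* normal trace */
      #ifdef CONFIG_TRACER_MAX_TRACE
              struct trace_buffer     max_buffer;    /* max trace / snapshot */
      #endif
      };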
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      12883efb
    • tracing: Only clear trace buffer on module unload if event was traced · 575380da
      Steven Rostedt (Red Hat) committed
      Currently, when a module with events is unloaded, the trace buffer is
      cleared. This is just a safety net in case the module might have some
      strange callback when its event is outputted. But there's no reason
      to reset the buffer if the module didn't have any of its events traced.
      
      Add a flag to the event "call" structure called WAS_ENABLED that gets set
      when the event is ever enabled; this flag never gets cleared. When a
      module gets unloaded, if any of its events have this flag set, then the
      trace buffer will get cleared.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      575380da
    • tracing: Add comment for trace event flag IGNORE_ENABLE · 2a30c11f
      Steven Rostedt (Red Hat) committed
      All the trace event flags have comments except the IGNORE_ENABLE flag,
      which is set for ftrace internal events that should not be enabled
      via the debugfs "enable" file. That is, if the top level enable file
      is set, it will enable all events. It used to just check the ftrace
      event call descriptor "reg" field and skip those without it, but now
      some ftrace internal events have a reg field but still need to be
      skipped. The flag was created to ignore those events.
      
      Now document it.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      2a30c11f
    • tracing: Add a helper function for event print functions · f71130de
      Li Zefan committed
      Move duplicate code in event print functions to a helper function.
      
      This shrinks the size of the kernel by ~13K.
      
         text    data     bss     dec     hex filename
      6596137 1743966 10138672        18478775        119f6b7 vmlinux.o.old
      6583002 1743849 10138672        18465523        119c2f3 vmlinux.o.new
      
      Link: http://lkml.kernel.org/r/51258746.2060304@huawei.com
      Signed-off-by: Li Zefan <lizefan@huawei.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f71130de
    • tracing: Pass the ftrace_file to the buffer lock reserve code · ccb469a1
      Steven Rostedt committed
      Pass the struct ftrace_event_file *ftrace_file to
      trace_event_buffer_lock_reserve() (a new function that replaces
      trace_current_buffer_lock_reserve()).
      
      The ftrace_file holds a pointer to the trace_array that is in use.
      In the case of multiple buffers with different trace_arrays, this
      allows different events to be recorded into different buffers.
      
      Also fixed some of the stale comments in include/trace/ftrace.h
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      ccb469a1
    • tracing: Separate out trace events from global variables · ae63b31e
      Steven Rostedt committed
      The trace events for ftrace are all defined via global variables.
      The arrays of events and event systems are linked to a global list.
      This prevents multiple users of the event system (what to enable and
      what not to).
      
      By adding descriptors to represent the event/file relation, as well
      as to which trace_array descriptor they are associated with, allows
      for more than one set of events to be defined. Once the trace events
      files have a link between the trace event and the trace_array they
      are associated with, we can create multiple trace_arrays that can
      record separate events in separate buffers.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      ae63b31e
  18. 31 Jan, 2013 1 commit
    • tracing: Make a snapshot feature available from userspace · debdd57f
      Hiraku Toyooka committed
      Ftrace has a snapshot feature available from kernel space, and
      latency tracers (e.g. irqsoff) are using it. This patch enables
      user applications to take a snapshot via debugfs.
      
      Add "snapshot" debugfs file in "tracing" directory.
      
        snapshot:
          This is used to take a snapshot and to read the output of the
          snapshot.
      
           # echo 1 > snapshot
      
          This will allocate the spare buffer for snapshot (if it is
          not allocated), and take a snapshot.
      
           # cat snapshot
      
          This will show contents of the snapshot.
      
           # echo 0 > snapshot
      
          This will free the snapshot if it is allocated.
      
          Any other positive values will clear the snapshot contents if
          the snapshot is allocated, or return EINVAL if it is not allocated.
      
      Link: http://lkml.kernel.org/r/20121226025300.3252.86850.stgit@liselsia
      
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: David Sharp <dhsharp@google.com>
      Signed-off-by: Hiraku Toyooka <hiraku.toyooka.gu@hitachi.com>
      [
         Fixed irqsoff selftest and also a conflict with a change
         that fixes the update_max_tr.
      ]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      debdd57f
  19. 22 Jan, 2013 2 commits
    • tracing: Remove the extra 4 bytes of padding in events · b000c806
      Steven Rostedt committed
      Due to a userspace issue with PowerTop v2beta, which hardcoded
      the offset of event fields that it was using, it broke when
      we removed the Big Kernel Lock counter from the event header.
      
       (commit e6e1e259 "tracing: Remove lock_depth from event entry")
      
      Because this broke userspace, it was determined that we must
      keep those 4 bytes around.
      
       (commit a3a4a5ac "Regression: partial revert "tracing: Remove lock_depth from event entry"")
      
      This unfortunately wastes space in the ring buffer. 4 bytes per
      event, where a lot of events are just 24 bytes. That's 16% of the
      buffer wasted. A million events will add 4 megs of white space
      into the buffer.
      
      It was later noticed that PowerTop v2beta could not work on systems
      where the kernel was 64 bit but the userspace was 32 bit.
      The reason was that the offsets are different between the
      two, and the hardcoded offset of one would not work with the other.
      
      With PowerTop v2 final, it implemented the same interface that both
      perf and trace-cmd use. That is, it reads the format file of
      the event to find the offsets of the fields it needs. This fixes
      the problem with running powertop on a 32 bit userspace running
      on a 64 bit kernel. It also no longer requires the 4 byte padding.
      
      As PowerTop v2 has been out for a while, and is included in all
      major distributions, it is time that we can safely remove the
      4 bytes of padding. Users of PowerTop v2beta should upgrade to
      PowerTop v2 final.
      
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Acked-by: Arjan van de Ven <arjan@linux.intel.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      b000c806
    • tracing: Fix sparse warning with is_signed_type() macro · 418c59e4
      Steven Rostedt committed
      Sparse complains when is_signed_type() is used on a pointer.
      This macro is needed for the format output used for ftrace
      and perf, to know if a binary field is a signed type or not.
      The is_signed_type() macro is used against all fields that are
      recorded by events to automate the operation.
      
      The problem sparse has is with the current way is_signed_type()
      works:
      
        ((type)-1 < 0)
      
      If "type" is a pointer, then sparse does not like it being compared
      to an integer (zero). The simple fix is to just give the zero the
      same type. The runtime result stays the same.
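      
      A hedged sketch of the change (per the description above; the exact
      spelling in the patch may differ):
      
      /* before: sparse warns when 'type' is a pointer (pointer vs. plain 0) */
      #define is_signed_type(type)    ((type)-1 < 0)
      
      /* after: the zero gets the same type; runtime result is unchanged */
      #define is_signed_type(type)    ((type)-1 < (type)0)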
      Reported-by: Robert Jarzmik <robert.jarzmik@free.fr>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      418c59e4
  20. 14 Nov, 2012 1 commit
    • tracing: Format non-nanosec times from tsc clock without a decimal point. · 8be0709f
      David Sharp committed
      With the addition of the "tsc" clock, formatting timestamps to look like
      fractional seconds is misleading. Mark clocks as either in nanoseconds or
      not, and format non-nanosecond timestamps as decimal integers.
      
      Tested:
      $ cd /sys/kernel/debug/tracing/
      $ cat trace_clock
      [local] global tsc
      $ echo sched_switch > set_event
      $ echo 1 > tracing_on ; sleep 0.0005 ; echo 0 > tracing_on
      $ cat trace
                <idle>-0     [000]  6330.555552: sched_switch: prev_comm=swapper prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=bash next_pid=29964 next_prio=120
                 sleep-29964 [000]  6330.555628: sched_switch: prev_comm=bash prev_pid=29964 prev_prio=120 prev_state=S ==> next_comm=swapper next_pid=0 next_prio=120
        ...
      $ echo 1 > options/latency-format
      $ cat trace
        <idle>-0       0 4104553247us+: sched_switch: prev_comm=swapper prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=bash next_pid=29964 next_prio=120
         sleep-29964   0 4104553322us+: sched_switch: prev_comm=bash prev_pid=29964 prev_prio=120 prev_state=S ==> next_comm=swapper next_pid=0 next_prio=120
        ...
      $ echo tsc > trace_clock
      $ cat trace
      $ echo 1 > tracing_on ; sleep 0.0005 ; echo 0 > tracing_on
      $ echo 0 > options/latency-format
      $ cat trace
                <idle>-0     [000] 16490053398357: sched_switch: prev_comm=swapper prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=bash next_pid=31128 next_prio=120
                 sleep-31128 [000] 16490053588518: sched_switch: prev_comm=bash prev_pid=31128 prev_prio=120 prev_state=S ==> next_comm=swapper next_pid=0 next_prio=120
        ...
      $ echo 1 > options/latency-format
      $ cat trace
        <idle>-0       0 91557653238+: sched_switch: prev_comm=swapper prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=bash next_pid=31128 next_prio=120
         sleep-31128   0 91557843399+: sched_switch: prev_comm=bash prev_pid=31128 prev_prio=120 prev_state=S ==> next_comm=swapper next_pid=0 next_prio=120
        ...
      
      v2:
      Move arch-specific bits out of generic code.
      v4:
      Fix x86_32 build due to 64-bit division.
      
      Google-Bug-Id: 6980623
      Link: http://lkml.kernel.org/r/1352837903-32191-2-git-send-email-dhsharp@google.com
      
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: David Sharp <dhsharp@google.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      8be0709f
  21. 02 Nov, 2012 1 commit
    • tracing: Use irq_work for wake ups and remove *_nowake_*() functions · 0d5c6e1c
      Steven Rostedt committed
      Have the ring buffer commit function use the irq_work infrastructure to
      wake up any waiters waiting on the ring buffer for new data. The irq_work
      was created for such a purpose, where doing the actual wake up at the
      time of adding data is too dangerous, as an event or function trace may
      be in the midst of the work queue locks and cause deadlocks. The irq_work
      will either delay the action to the next timer interrupt, or trigger an IPI
      to itself forcing an interrupt to do the work (in a safe location).
      
      With irq_work, all ring buffer commits can safely do wakeups, removing
      the need for the ring buffer commit "nowake" variants, which were used
      by events and function tracing. All commits can now safely use the
      normal commit, and the "nowake" variants can be removed.
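      
      A hedged sketch of the pattern (identifiers here are illustrative, not
      necessarily the patch's exact names):
      
      #include <linux/irq_work.h>
      #include <linux/wait.h>
      
      static DECLARE_WAIT_QUEUE_HEAD(trace_wait);
      static struct irq_work trace_work_wakeup;
      
      /* runs later, from a safe context (timer tick or self-IPI) */
      static void trace_wake_up(struct irq_work *work)
      {
              wake_up_all(&trace_wait);
      }
      
      static void trace_wakeup_init(void)
      {
              init_irq_work(&trace_work_wakeup, trace_wake_up);
      }
      
      /* in the commit path: safe to call with arbitrary locks held */
      static void trace_commit_wakeup(void)
      {
              irq_work_queue(&trace_work_wakeup);
      }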
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      0d5c6e1c
  22. 31 Jul, 2012 1 commit
  23. 29 Jun, 2012 1 commit
    • tracing: Remove NR_CPUS array from trace_iterator · 6d158a81
      Steven Rostedt committed
      Replace the NR_CPUS array of buffer_iter in the trace_iterator
      with an allocated array. This creates an array sized for the
      possible CPUs instead of the maximum number specified by NR_CPUS.
      
      The use of NR_CPUS in that array caused allocation failures for
      machines that were tight on memory. This did not cause any failures
      to the system itself (no crashes), but caused unnecessary failures
      for reading the trace files.
      
      Added a helper function called trace_buffer_iter() that returns
      the buffer_iter item, or NULL if it is not defined or the array was
      not allocated. Some routines do not require the array
      (tracing_open_pipe() for one), as shown in the sketch below.
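      
      The helper reads roughly like this (a sketch based on the description):
      
      static inline struct ring_buffer_iter *
      trace_buffer_iter(struct trace_iterator *iter, int cpu)
      {
              /* the array may be absent entirely, e.g. for tracing_open_pipe() */
              if (iter->buffer_iter)
                      return iter->buffer_iter[cpu];
              return NULL;
      }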
      Reported-by: Dave Jones <davej@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      6d158a81
  24. 15 Jun, 2012 1 commit
    • tracing: Add comments for the other bits of ftrace_event_call.flags · 5da43bed
      Steven Rostedt committed
      	TRACE_EVENT_FL_ENABLED_BIT,
      	TRACE_EVENT_FL_FILTERED_BIT,
      	TRACE_EVENT_FL_RECORDED_CMD_BIT,
      
      Have comments about what they are, but:
      
      	TRACE_EVENT_FL_CAP_ANY_BIT,
      	TRACE_EVENT_FL_NO_SET_FILTER_BIT,
      	TRACE_EVENT_FL_IGNORE_ENABLE_BIT,
      
      do not, making them second class citizens. To prevent another
      class warfare, these bits have protested for their right to be
      commented. And By Golly! I'll give them what they want!
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      5da43bed
  25. 11 May, 2012 1 commit
    • tracing: Do not enable function event with enable · 9b63776f
      Steven Rostedt committed
      With the adding of function tracing event to perf, it caused a
      side effect that produces the following warning when enabling all
      events in ftrace:
      
       # echo 1 > /sys/kernel/debug/tracing/events/enable
      
      [console]
      event trace: Could not enable event function
      
      This is because when enabling all events via the debugfs system
      it ignores events that do not have a ->reg() function assigned.
      This was to skip over the ftrace internal events (as they are
      not TRACE_EVENTs). But as the ftrace function event now has
      a ->reg() function attached to it for use with perf, it is no
      longer ignored.
      
      Worse yet, this ->reg() function is being called when it should
      not be. It returns an error and causes the above warning to
      be printed.
      
      By adding a new event_call flag (TRACE_EVENT_FL_IGNORE_ENABLE)
      and having all ftrace internal event structures set it,
      writing to events/enable will no longer try to incorrectly enable
      the function event, and does not warn.
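      
      A hedged sketch of the resulting check in the enable-all path (modeled
      on __ftrace_set_clr_event(); approximate, not the patch's exact code):
      
      list_for_each_entry(call, &ftrace_events, list) {
              /* skip events without a reg() method, and internal events
               * explicitly flagged to be ignored by events/enable */
              if (!call->name || !call->class || !call->class->reg)
                      continue;
              if (call->flags & TRACE_EVENT_FL_IGNORE_ENABLE)
                      continue;
              /* ... match the glob and enable/disable the event ... */
      }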
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      9b63776f
  26. 14 Mar, 2012 1 commit