1. 03 Feb 2021, 1 commit
    • tracing: Merge irqflags + preempt counter. · 36590c50
      Committed by Sebastian Andrzej Siewior
      The state of the interrupts (irqflags) and the preemption counter are
      both passed down to tracing_generic_entry_update(). Only one bit of
      irqflags is actually required: the on/off state. The complete 32 bits
      of the preemption counter aren't needed; only whether any of the upper
      bits (softirq, hardirq and NMI) are set, plus the preemption depth.
      
      The irqflags and the preemption counter can be evaluated early and the
      information stored in an integer `trace_ctx'.
      tracing_generic_entry_update() would use the upper bits as the
      TRACE_FLAG_* and the lower 8 bits as the disabled-preemption depth
      (considering that one must be subtracted from the counter in one
      special case).
      
      The actual preemption value is not used except for the tracing record.
      The `irqflags' variable is mostly used only for the tracing record. An
      exception is, for instance, wakeup_tracer_call() or
      probe_wakeup_sched_switch(), which explicitly disable interrupts and use
      that `irqflags' to save (and restore) the IRQ state and to record the
      state.
      
      Struct trace_event_buffer also has the `pc' and `flags' members, which can
      be replaced with `trace_ctx' since their actual value is not used
      outside of trace recording.
      
      This will reduce tracing_generic_entry_update() to simply assign values
      to struct trace_entry. The evaluation of the TRACE_FLAG_* bits is moved
      to _tracing_gen_ctx_flags() which replaces preempt_count() and
      local_save_flags() invocations.
      
      As an example, ftrace_syscall_enter() may invoke:
      - trace_buffer_lock_reserve() -> … -> tracing_generic_entry_update()
      - event_trigger_unlock_commit()
        -> ftrace_trace_stack() -> … -> tracing_generic_entry_update()
        -> ftrace_trace_userstack() -> … -> tracing_generic_entry_update()
      
      In this case the TRACE_FLAG_* bits were evaluated three times. By using
      the `trace_ctx' they are evaluated once and assigned three times.
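
      A minimal sketch of the packing described above; the function name and
      TRACE_FLAG_* follow the text, while the shift width and the exact set
      of flags tested here are illustrative, not the final implementation:

      /* Pack irq/softirq/hardirq/NMI state and preempt depth into one int. */
      unsigned int _tracing_gen_ctx_flags(unsigned long irqflags)
      {
              unsigned int trace_flags = 0;
              unsigned int pc = preempt_count();

              if (irqs_disabled_flags(irqflags))
                      trace_flags |= TRACE_FLAG_IRQS_OFF;

              if (pc & NMI_MASK)
                      trace_flags |= TRACE_FLAG_NMI;
              if (pc & HARDIRQ_MASK)
                      trace_flags |= TRACE_FLAG_HARDIRQ;
              if (pc & SOFTIRQ_OFFSET)
                      trace_flags |= TRACE_FLAG_SOFTIRQ;

              /* upper bits: TRACE_FLAG_*; lower 8 bits: preemption depth */
              return (trace_flags << 16) | (pc & 0xff);
      }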
      
      A build with all tracers enabled on x86-64 with and without the patch:
      
          text     data      bss      dec      hex    filename
      21970669 17084168  7639260 46694097  2c87ed1 vmlinux.old
      21970293 17084168  7639260 46693721  2c87d59 vmlinux.new
      
      text shrank by 376 bytes; data remained constant.
      
      Link: https://lkml.kernel.org/r/20210125194511.3924915-2-bigeasy@linutronix.de
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  2. 26 Oct 2020, 1 commit
  3. 20 Mar 2020, 1 commit
    • tracing: Save off entry when peeking at next entry · ff895103
      Committed by Steven Rostedt (VMware)
      In order to have the iterator read the buffer even while it is still
      being updated, the ring buffer iterator must save each event in a
      separate location outside the ring buffer, such that its use is immutable.
      
      There's one use case that saves off the event returned from the ring buffer
      iterator and calls it again to look at the next event, before going back to
      use the first event. As the ring buffer iterator will only have a single
      copy, this use case will no longer be supported.
      
      Instead, have the one use case create its own buffer to store the first
      event when looking at the next event. This way, when looking at the first
      event again, it won't be corrupted by the second read.
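
      A sketch of the save-off pattern described above; the helper and the
      caller-owned buffer are hypothetical, the point is that the first
      event is copied out before the iterator is asked for the next one:

      /* Copy the current event into a private buffer before peeking. */
      struct trace_entry *peek_with_save(struct trace_iterator *iter,
                                         void *saved, size_t size)
      {
              struct trace_entry *ent = trace_find_next_entry(iter, NULL, NULL);

              if (ent && iter->ent_size <= size) {
                      memcpy(saved, ent, iter->ent_size);
                      ent = saved;    /* immutable private copy */
              }
              return ent;
      }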
      
      Link: http://lkml.kernel.org/r/20200317213415.722539921@goodmis.org
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  4. 11 Feb 2020, 1 commit
  5. 02 Feb 2020, 1 commit
  6. 30 Jan 2020, 6 commits
  7. 14 Jan 2020, 3 commits
  8. 27 Nov 2019, 1 commit
    • ftrace: Rework event_create_dir() · 04ae87a5
      Committed by Peter Zijlstra
      Rework event_create_dir() to use an array of static data instead of
      function pointers where possible.
      
      The problem is that it would call the function pointer on module load
      before parse_args(), possibly even before jump_labels were initialized.
      Luckily the generated functions don't use jump_labels, but it still seems
      fragile. It also gets in the way of changing when we make the module map
      executable.
      
      The generated functions basically call trace_define_field() with a
      bunch of static arguments. So instead of a function, capture these
      arguments in a static array, avoiding the function call.
      
      Now there are a number of cases where the fields are dynamic (syscall
      arguments, kprobes and uprobes), in which case a static array does not
      work; for these we preserve the function call. Luckily none of these
      cases is related to modules, so we can retain the function call for
      them.
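
      A sketch of the data shape this implies, modeled on the description
      (member names are illustrative; the union arm keeps the function call
      for the dynamic cases mentioned above):

      struct trace_event_fields {
              const char *type;
              union {
                      struct {
                              const char *name;
                              const int   size;
                              const int   align;
                              const int   is_signed;
                              const int   filter_type;
                      };
                      int (*define_fields)(struct trace_event_call *);
              };
      };

      /* Static fields become array entries walked at registration time,
       * instead of a generated function calling trace_define_field(). */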
      
      Also fix up all broken tracepoint definitions that now generate a
      compile error.
      Tested-by: Alexei Starovoitov <ast@kernel.org>
      Tested-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: https://lkml.kernel.org/r/20191111132458.342979914@infradead.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  9. 23 Nov 2019, 1 commit
  10. 15 Nov 2019, 1 commit
  11. 31 Aug 2019, 1 commit
  12. 17 Jul 2019, 2 commits
  13. 19 Dec 2018, 1 commit
  14. 11 Dec 2018, 1 commit
  15. 09 Dec 2018, 1 commit
    • tracing: Lock event_mutex before synth_event_mutex · fc800a10
      Committed by Masami Hiramatsu
      The synthetic event code uses synth_event_mutex to protect
      synth_event_list, and the event_trigger_write() path acquires locks in
      the following order:
      
      event_trigger_write(event_mutex)
        ->trigger_process_regex(trigger_cmd_mutex)
          ->event_hist_trigger_func(synth_event_mutex)
      
      On the other hand, the synthetic event creation and deletion paths
      call trace_add_event_call() and trace_remove_event_call(), which
      acquire event_mutex. In that case, if we keep synth_event_mutex locked
      while registering/unregistering synthetic events, the lock dependency
      will be inverted.
      
      To avoid this issue, the current synthetic event code uses a two-phase
      process to create/delete events. For example, it searches existing
      events under synth_event_mutex to check for event-name conflicts,
      unlocks synth_event_mutex, then registers the new event with
      event_mutex locked. Finally, it locks synth_event_mutex and tries to
      add the new event to the list. But this introduces complexity and a
      window for name conflicts.
      
      To solve this more simply, introduce trace_add_event_call_nolock()
      and trace_remove_event_call_nolock(), which don't acquire event_mutex
      internally. The synthetic event code can then lock event_mutex before
      synth_event_mutex, resolving the lock dependency issue.
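
      A sketch of the resulting lock ordering (simplified; only the _nolock
      names come from the text above, the rest is illustrative):

      mutex_lock(&event_mutex);               /* taken first, as triggers do */
      mutex_lock(&synth_event_mutex);

      /* ... check synth_event_list for a name conflict ... */
      ret = trace_add_event_call_nolock(&event->call); /* event_mutex held */
      if (!ret)
              list_add(&event->list, &synth_event_list);

      mutex_unlock(&synth_event_mutex);
      mutex_unlock(&event_mutex);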
      
      Link: http://lkml.kernel.org/r/154140844377.17322.13781091165954002713.stgit@devbox
      Reviewed-by: Tom Zanussi <tom.zanussi@linux.intel.com>
      Tested-by: Tom Zanussi <tom.zanussi@linux.intel.com>
      Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  16. 11 Oct 2018, 1 commit
  17. 29 May 2018, 1 commit
    • tracing: Do not reference event data in post call triggers · c94e45bc
      Committed by Steven Rostedt (VMware)
      Trace event triggers can be called before or after the event has been
      committed. If called after the commit, there's a possibility that the
      event no longer exists. Currently, the two post-commit callers are the
      trigger that disables tracing (traceoff) and the one that records a
      stack dump (stacktrace). Neither of them references the trace event
      entry record, as that would lead to a race condition that could pass
      in corrupted data.
      
      To prevent any other users of the post-call triggers from using the
      trace event record, pass in NULL to the post-call trigger functions
      for the event record, as they should never need to use it in the
      first place.

      This does not fix any bug, but prevents bugs from being introduced by
      new post-call trigger users.
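
      A sketch of what the post-call path looks like after this change
      (simplified; the iteration details are illustrative):

      /* Post-commit triggers never see the (possibly gone) record. */
      void event_triggers_post_call(struct trace_event_file *file,
                                    enum event_trigger_type tt)
      {
              struct event_trigger_data *data;

              list_for_each_entry_rcu(data, &file->triggers, list) {
                      if (data->paused)
                              continue;
                      if (data->cmd_ops->trigger_type & tt)
                              data->ops->func(data, NULL, NULL); /* no record */
              }
      }
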
      Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
      Reviewed-by: Namhyung Kim <namhyung@kernel.org>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  18. 25 May 2018, 1 commit
    • bpf: introduce bpf subcommand BPF_TASK_FD_QUERY · 41bdc4b4
      Committed by Yonghong Song
      Suppose a userspace application has loaded a bpf program and attached
      it to a tracepoint/kprobe/uprobe, and a bpf introspection tool, e.g.
      bpftool, wants to show which bpf program is attached to which
      tracepoint/kprobe/uprobe. Such attachment information is really
      useful for understanding the overall bpf deployment in the system.
      
      There is a name field (16 bytes) for each program, which could
      be used to encode the attachment point. There are some drawbacks
      to this approach. First, a bpftool user (e.g., an admin) may not
      really understand the association between the name and the
      attachment point. Second, if one program is attached to multiple
      places, encoding a proper name which can imply all these
      attachments becomes difficult.
      
      This patch introduces a new bpf subcommand BPF_TASK_FD_QUERY.
      Given a pid and fd, if the <pid, fd> is associated with a
      tracepoint/kprobe/uprobe perf event, BPF_TASK_FD_QUERY will return
         . prog_id
         . tracepoint name, or
         . k[ret]probe funcname + offset or kernel addr, or
         . u[ret]probe filename + offset
      to the userspace.
      The user can use "bpftool prog" to find more information about
      bpf program itself with prog_id.
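
      A hedged userspace sketch of the query (the task_fd_query field names
      follow this description; buffer size and error handling are
      illustrative):

      /* Ask the kernel what is attached behind <pid, fd>. */
      char buf[256];
      union bpf_attr attr = {};

      attr.task_fd_query.pid = pid;
      attr.task_fd_query.fd = fd;
      attr.task_fd_query.buf = (__u64)(unsigned long)buf;
      attr.task_fd_query.buf_len = sizeof(buf);

      if (syscall(__NR_bpf, BPF_TASK_FD_QUERY, &attr, sizeof(attr)) == 0)
              printf("prog_id %u attached at %s\n",
                     attr.task_fd_query.prog_id, buf);
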
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  19. 29 Mar 2018, 1 commit
    • bpf: introduce BPF_RAW_TRACEPOINT · c4f6699d
      Committed by Alexei Starovoitov
      Introduce BPF_PROG_TYPE_RAW_TRACEPOINT bpf program type to access
      kernel internal arguments of the tracepoints in their raw form.
      
      From the bpf program's point of view, access to the arguments looks like:
      struct bpf_raw_tracepoint_args {
             __u64 args[0];
      };
      
      int bpf_prog(struct bpf_raw_tracepoint_args *ctx)
      {
        // program can read args[N] where N depends on tracepoint
        // and statically verified at program load+attach time
      }
      
      The kprobe+bpf infrastructure allows programs to access function
      arguments. This feature allows programs to access raw tracepoint
      arguments.
      
      Similar to the proposed 'dynamic ftrace events' there are no ABI
      guarantees about what the tracepoint arguments are and what they mean.
      The program needs to type-cast the args properly and use the
      bpf_probe_read() helper to access struct fields when an argument is a
      pointer.
      
      For every tracepoint __bpf_trace_##call function is prepared.
      In assembler it looks like:
      (gdb) disassemble __bpf_trace_xdp_exception
      Dump of assembler code for function __bpf_trace_xdp_exception:
         0xffffffff81132080 <+0>:     mov    %ecx,%ecx
         0xffffffff81132082 <+2>:     jmpq   0xffffffff811231f0 <bpf_trace_run3>
      
      where
      
      TRACE_EVENT(xdp_exception,
              TP_PROTO(const struct net_device *dev,
                       const struct bpf_prog *xdp, u32 act),
      
      The above assembler snippet casts the 32-bit 'act' field into 'u64'
      to pass into bpf_trace_run3(), while the 'dev' and 'xdp' args are
      passed as-is. All ~500 __bpf_trace_*() functions are only 5-10 bytes
      long, and in total this approach adds 7k bytes to .text.
      
      This approach gives the lowest possible overhead
      when calling trace_xdp_exception() from kernel C code and
      transitioning into bpf land.
      Since tracepoint+bpf is used at speeds of 1M+ events per second,
      this is a valuable optimization.
      
      The new BPF_RAW_TRACEPOINT_OPEN sys_bpf command is introduced; it
      returns an anon_inode FD of a 'bpf-raw-tracepoint' object.
      
      The user space looks like:
      // load bpf prog with BPF_PROG_TYPE_RAW_TRACEPOINT type
      prog_fd = bpf_prog_load(...);
      // receive anon_inode fd for given bpf_raw_tracepoint with prog attached
      raw_tp_fd = bpf_raw_tracepoint_open("xdp_exception", prog_fd);
      
      Ctrl-C of the tracing daemon or cmdline tool that uses this feature
      will automatically detach the bpf program, unload it and
      unregister the tracepoint probe.
      
      On the kernel side, the __bpf_raw_tp_map section of pointers to the
      tracepoint definition and to the __bpf_trace_*() probe function is
      used to find the tracepoint with the "xdp_exception" name and the
      corresponding __bpf_trace_xdp_exception() probe function, which are
      passed to tracepoint_probe_register() to connect the probe with the
      tracepoint.
      
      The addition of bpf_raw_tracepoint doesn't interfere with the ftrace
      and perf tracepoint mechanisms. perf_event_open() can be used in
      parallel on the same tracepoint.
      Multiple bpf_raw_tracepoint_open("xdp_exception", prog_fd) calls are
      permitted, each with its own bpf program. The kernel will execute
      all tracepoint probes and all attached bpf programs.
      
      In the future bpf_raw_tracepoints can be extended with
      query/introspection logic.
      
      The __bpf_raw_tp_map section logic was contributed by Steven Rostedt.
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
  20. 11 Mar 2018, 1 commit
  21. 06 Feb 2018, 2 commits
  22. 14 Dec 2017, 1 commit
    • bpf/tracing: fix kernel/events/core.c compilation error · f4e2298e
      Committed by Yonghong Song
      Commit f371b304 ("bpf/tracing: allow user space to
      query prog array on the same tp") introduced a perf
      ioctl command to query prog array attached to the
      same perf tracepoint. The commit introduced a
      compilation error under certain config conditions, e.g.,
        (1). CONFIG_BPF_SYSCALL is not defined, or
        (2). CONFIG_TRACING is defined but neither CONFIG_UPROBE_EVENTS
             nor CONFIG_KPROBE_EVENTS is defined.
      
      Error message:
        kernel/events/core.o: In function `perf_ioctl':
        core.c:(.text+0x98c4): undefined reference to `bpf_event_query_prog_array'
      
      This patch fixes the error by guarding the real definition under
      CONFIG_BPF_EVENTS and providing a static inline dummy function when
      CONFIG_BPF_EVENTS is not defined.
      It renames the function from bpf_event_query_prog_array to
      perf_event_query_prog_array and moves the definition from linux/bpf.h
      to linux/trace_events.h so the definition is in proximity to
      other prog_array related functions.
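
      A sketch of the guard pattern described (simplified; the stub's
      return value is illustrative):

      /* linux/trace_events.h: real definition only with CONFIG_BPF_EVENTS. */
      #ifdef CONFIG_BPF_EVENTS
      int perf_event_query_prog_array(struct perf_event *event,
                                      void __user *info);
      #else
      static inline int
      perf_event_query_prog_array(struct perf_event *event, void __user *info)
      {
              return -EOPNOTSUPP;
      }
      #endif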
      
      Fixes: f371b304 ("bpf/tracing: allow user space to query prog array on the same tp")
      Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
  23. 13 Dec 2017, 1 commit
  24. 11 Nov 2017, 2 commits
  25. 02 Nov 2017, 1 commit
    • License cleanup: add SPDX GPL-2.0 license identifier to files with no license · b2441318
      Committed by Greg Kroah-Hartman
      Many source files in the tree are missing licensing information, which
      makes it harder for compliance tools to determine the correct license.
      
      By default all files without license information are under the default
      license of the kernel, which is GPL version 2.
      
      Update the files which contain no license information with the
      'GPL-2.0' SPDX license identifier.  The SPDX identifier is a legally
      binding shorthand, which can be used instead of the full boilerplate
      text.
      
      This patch is based on work done by Thomas Gleixner and Kate Stewart and
      Philippe Ombredanne.
      
      How this work was done:
      
      Patches were generated and checked against linux-4.14-rc6 for a subset of
      the use cases:
       - the file had no licensing information in it,
       - the file was a */uapi/* one with no licensing information in it,
       - the file was a */uapi/* one with existing licensing information.
      
      Further patches will be generated in subsequent months to fix up cases
      where non-standard license headers were used, and references to license
      had to be inferred by heuristics based on keywords.
      
      The analysis to determine which SPDX License Identifier to apply to a
      file was done in a spreadsheet of side-by-side results from the output
      of two independent scanners (ScanCode & Windriver) producing SPDX
      tag:value files, created by Philippe Ombredanne.  Philippe prepared the
      base worksheet, and did an initial spot review of a few 1000 files.
      
      The 4.13 kernel was the starting point of the analysis with 60,537 files
      assessed.  Kate Stewart did a file by file comparison of the scanner
      results in the spreadsheet to determine which SPDX license identifier(s)
      to be applied to the file. She confirmed any determination that was not
      immediately clear with lawyers working with the Linux Foundation.
      
      Criteria used to select files for SPDX license identifier tagging were:
       - Files considered eligible had to be source code files.
       - Make and config files were included as candidates if they contained >5
         lines of source
       - File already had some variant of a license header in it (even if <5
         lines).
      
      All documentation files were explicitly excluded.
      
      The following heuristics were used to determine which SPDX license
      identifiers to apply.
      
       - when both scanners couldn't find any license traces, file was
         considered to have no license information in it, and the top level
         COPYING file license applied.
      
         For non */uapi/* files that summary was:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0                                              11139
      
         and resulted in the first patch in this series.
      
         If that file was a */uapi/* path one, it was "GPL-2.0 WITH
         Linux-syscall-note" otherwise it was "GPL-2.0".  Results of that was:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0 WITH Linux-syscall-note                        930
      
         and resulted in the second patch in this series.
      
       - if a file had some form of licensing information in it, and was one
         of the */uapi/* ones, it was denoted with the Linux-syscall-note if
         any GPL family license was found in the file or had no licensing in
         it (per prior point).  Results summary:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|------
         GPL-2.0 WITH Linux-syscall-note                       270
         GPL-2.0+ WITH Linux-syscall-note                      169
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)    21
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)    17
         LGPL-2.1+ WITH Linux-syscall-note                      15
         GPL-1.0+ WITH Linux-syscall-note                       14
         ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)    5
         LGPL-2.0+ WITH Linux-syscall-note                       4
         LGPL-2.1 WITH Linux-syscall-note                        3
         ((GPL-2.0 WITH Linux-syscall-note) OR MIT)              3
         ((GPL-2.0 WITH Linux-syscall-note) AND MIT)             1
      
         and that resulted in the third patch in this series.
      
       - when the two scanners agreed on the detected license(s), that became
         the concluded license(s).
      
       - when there was disagreement between the two scanners (one detected a
         license but the other didn't, or they both detected different
         licenses) a manual inspection of the file occurred.
      
       - In most cases a manual inspection of the information in the file
         resulted in a clear resolution of the license that should apply (and
         which scanner probably needed to revisit its heuristics).
      
       - When it was not immediately clear, the license identifier was
         confirmed with lawyers working with the Linux Foundation.
      
       - If there was any question as to the appropriate license identifier,
         the file was flagged for further research and to be revisited later
         in time.
      
      In total, over 70 hours of logged manual review was done on the
      spreadsheet to determine the SPDX license identifiers to apply to the
      source files by Kate, Philippe, Thomas and, in some cases, confirmation
      by lawyers working with the Linux Foundation.
      
      Kate also obtained a third independent scan of the 4.13 code base from
      FOSSology, and compared selected files where the other two scanners
      disagreed against that SPDX file, to see if there were new insights.
      The Windriver scanner is based in part on an older version of
      FOSSology, so they are related.
      
      Thomas did random spot checks in about 500 files from the spreadsheets
      for the uapi headers and agreed with SPDX license identifier in the
      files he inspected. For the non-uapi files Thomas did random spot checks
      in about 15000 files.
      
      In the initial set of patches against 4.14-rc6, 3 files were found to
      have copy/paste license identifier errors, and have been fixed to
      reflect the correct identifier.
      
      Additionally Philippe spent 10 hours this week doing a detailed manual
      inspection and review of the 12,461 patched files from the initial patch
      version early this week with:
       - a full scancode scan run, collecting the matched texts, detected
         license ids and scores
       - reviewing anything where there was a license detected (about 500+
         files) to ensure that the applied SPDX license was correct
       - reviewing anything where there was no detection but the patch license
         was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
         SPDX license was correct
      
      This produced a worksheet with 20 files needing minor correction.  This
      worksheet was then exported into 3 different .csv files for the
      different types of files to be modified.
      
      These .csv files were then reviewed by Greg.  Thomas wrote a script to
      parse the csv files and add the proper SPDX tag to the file, in the
      format that the file expected.  This script was further refined by Greg
      based on the output to detect more types of files automatically and to
      distinguish between header and source .c files (which need different
      comment types.)  Finally Greg ran the script using the .csv files to
      generate the patches.
      Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
      Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  26. 25 Oct 2017, 1 commit
    • bpf: permit multiple bpf attachments for a single perf event · e87c6bc3
      Committed by Yonghong Song
      This patch enables multiple bpf attachments for a single
      kprobe/uprobe/tracepoint trace event.
      Each trace_event keeps a list of attached perf events.
      When an event happens, all attached bpf programs will
      be executed in the order of attachment.
      
      A global bpf_event_mutex lock is introduced to protect
      prog_array attaching and detaching. An alternative would be to
      introduce a mutex lock in every trace_event_call
      structure, but that takes a lot of extra memory.
      So a global bpf_event_mutex lock is a good compromise.
      
      The bpf prog detachment involves allocation of memory.
      If the allocation fails, a dummy do-nothing program
      will replace the to-be-detached program in place.
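
      A sketch of how an event then runs its attached programs in attach
      order (simplified; the array layout and run macro usage are
      illustrative):

      /* Run every attached program; any program returning 0 clears ret. */
      static unsigned int run_attached_progs(struct bpf_prog_array *array,
                                             void *ctx)
      {
              struct bpf_prog **prog = array->progs;
              unsigned int ret = 1;

              while (*prog) {
                      ret &= BPF_PROG_RUN(*prog, ctx);
                      prog++;
              }
              return ret;
      }
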
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  27. 17 Oct 2017, 2 commits
  28. 21 Sep 2017, 1 commit
    • bpf: one perf event close won't free bpf program attached by another perf event · ec9dd352
      Committed by Yonghong Song
      This patch fixes a bug exhibited by the following scenario:
        1. fd1 = perf_event_open with attr.config = ID1
        2. attach bpf program prog1 to fd1
        3. fd2 = perf_event_open with attr.config = ID1
           <this will be successful>
        4. user program closes fd2 and prog1 is detached from the tracepoint.
        5. the user program with fd1 no longer works properly, as the
           tracepoint produces no output any more.
      
      The issue happens at step 4. Multiple perf_event_open calls can
      succeed, but there is only one bpf prog pointer in the tp_event. In
      the current logic, any fd release for the same tp_event will free
      the tp_event->prog.
      
      The fix is to free tp_event->prog only when the closing fd
      corresponds to the one which registered the program.
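
      A sketch of the ownership check this implies (the owner field name is
      an assumption; simplified):

      /* Only the perf event that registered the program may free it. */
      static void perf_event_detach_bpf_prog(struct perf_event *event)
      {
              struct trace_event_call *call = event->tp_event;

              if (call->prog && call->bpf_prog_owner == event) {
                      bpf_prog_put(call->prog);
                      call->prog = NULL;
                      call->bpf_prog_owner = NULL;
              }
              /* otherwise: another event owns the program, leave it alone */
      }
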
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  29. 01 Sep 2017, 1 commit
    • tracing: Only have rmmod clear buffers that its events were active in · 065e63f9
      Committed by Steven Rostedt (VMware)
      Currently, when an event from a module is enabled and that module is
      then removed, all ring buffers are cleared. This is to prevent another
      module from being loaded and having one of its trace event IDs reuse a
      trace event ID of the removed module. This could cause undesirable
      effects, as the trace event of the new module would be using its own
      processing algorithms to process raw data of another event. To prevent
      this, when a module is removed, if any of its events have been used
      (signified by the WAS_ENABLED event call flag, which is never cleared),
      all ring buffers are cleared, just in case any one of them contains
      event data of the removed event.
      
      The problem is, there's no reason to clear all ring buffers if only
      one (or fewer than all of them) uses one of the events. Instead, only
      clear the ring buffers that recorded the events of the module that is
      being removed.
      
      To do this, instead of keeping the WAS_ENABLED flag with the trace
      event call, move it to the per-instance (per ring buffer) event file
      descriptor. The event file descriptor maps each event to a separate
      ring buffer instance. Then when the module is removed, only the ring
      buffers that activated one of the module's events get cleared. The
      rest are not touched.
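
      A sketch of the per-instance clearing this describes (the flag name
      follows the text; the loop and helper names are illustrative):

      /* On module removal, reset only instances whose event files saw one
       * of the module's events. */
      list_for_each_entry(tr, &ftrace_trace_arrays, list) {
              list_for_each_entry(file, &tr->events, list) {
                      if (file->event_call->mod == mod &&
                          (file->flags & EVENT_FILE_FL_WAS_ENABLED)) {
                              tracing_reset_online_cpus(&tr->trace_buffer);
                              break;  /* this instance is cleared; next tr */
                      }
              }
      }
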
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>