1. 24 Jan 2014, 1 commit
  2. 23 Jan 2014, 2 commits
  3. 20 Jan 2014, 1 commit
    • tracing: Fix buggered tee(2) on tracing_pipe · 92fdd98c
      Committed by Al Viro
      In kernel/trace/trace.c we have this:
      static void tracing_pipe_buf_release(struct pipe_inode_info *pipe,
                                           struct pipe_buffer *buf)
      {
              __free_page(buf->page);
      }
      static const struct pipe_buf_operations tracing_pipe_buf_ops = {
              .can_merge              = 0,
              .map                    = generic_pipe_buf_map,
              .unmap                  = generic_pipe_buf_unmap,
              .confirm                = generic_pipe_buf_confirm,
              .release                = tracing_pipe_buf_release,
              .steal                  = generic_pipe_buf_steal,
              .get                    = generic_pipe_buf_get,
      };
      with
      void generic_pipe_buf_get(struct pipe_inode_info *pipe, struct pipe_buffer *buf)
      {
              page_cache_get(buf->page);
      }
      
      and I don't see anything that would've prevented tee(2) from being called
      on the pipe that got stuff spliced into it from that sucker.  ->ops->get()
      will be called, then buf gets copied into the target pipe's ->bufs[] and
      eventually readers get to both copies of the buffer.  We end up with
      	get_page(page)
      	look at that page
      	__free_page(page)
      	look at that page
      	__free_page(page)
      which is not a good thing, to put it mildly.  AFAICS, that ought to use
      the normal generic_pipe_buf_release() (aka page_cache_release(buf->page)),
      shouldn't it?
      
      [
       SDR - As trace_pipe just allocates the page with alloc_page(GFP_KERNEL)
        and doesn't do anything special with it (no LRU logic), the __free_page()
        should be fine, as it won't actually free a page that still has a
        reference count.  Maybe there's a chance to leak memory?  Anyway, this
        change is at a minimum good for being symmetric with
        generic_pipe_buf_get(), so it is fine to add.
      ]
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      [ SDR - Removed the no-longer-used tracing_pipe_buf_release ]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
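      For reference, a minimal sketch of the ops table after the fix lands as
      the message suggests: the open-coded release is dropped in favor of the
      generic helper, so .get and .release operate symmetrically on the page
      reference count (struct layout assumed from kernel/trace/trace.c of that
      era):

      static const struct pipe_buf_operations tracing_pipe_buf_ops = {
              .can_merge              = 0,
              .map                    = generic_pipe_buf_map,
              .unmap                  = generic_pipe_buf_unmap,
              .confirm                = generic_pipe_buf_confirm,
              /* drops the page reference taken by generic_pipe_buf_get() */
              .release                = generic_pipe_buf_release,
              .steal                  = generic_pipe_buf_steal,
              .get                    = generic_pipe_buf_get,
      };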
  4. 14 Jan 2014, 1 commit
    • tracing: Have trace buffer point back to trace_array · dced341b
      Committed by Steven Rostedt (Red Hat)
      The trace buffer has a descriptor pointer that goes back to the trace
      array. But it was never assigned. Luckily, nothing uses it (yet), but
      it will in the future.
      
      Although nothing currently uses this, some of the new features may get
      backported to older kernels; because this is such a simple change, I'm
      marking it for stable too.
      
      Cc: stable@vger.kernel.org # v3.10+
      Fixes: 12883efb "tracing: Consolidate max_tr into main trace_array structure"
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
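      A minimal sketch of the kind of one-line assignment this fix adds
      (function and field names are assumed from kernel/trace/trace.c of the
      v3.10+ era; most of the allocation path is elided):

      static int allocate_trace_buffer(struct trace_array *tr,
                                       struct trace_buffer *buf, int size)
      {
              int rb_flags = trace_flags & TRACE_ITER_OVERWRITE ?
                             RB_FL_OVERWRITE : 0;

              /* the back-pointer that was never assigned: let code holding
               * only the buffer find its owning trace_array */
              buf->tr = tr;

              buf->buffer = ring_buffer_alloc(size, rb_flags);
              if (!buf->buffer)
                      return -ENOMEM;

              /* ... per-cpu data setup elided ... */
              return 0;
      }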
  5. 03 Jan 2014, 1 commit
  6. 22 Dec 2013, 1 commit
    • tracing: Add 'snapshot' event trigger command · 93e31ffb
      Committed by Tom Zanussi
      Add 'snapshot' event_command.  snapshot event triggers are added by
      the user via this command in a similar way and using practically the
      same syntax as the analogous 'snapshot' ftrace function command, but
      instead of writing to the set_ftrace_filter file, the snapshot event
      trigger is written to the per-event 'trigger' files:
      
          echo 'snapshot' > .../somesys/someevent/trigger
      
      The above command will turn on snapshots for someevent, i.e. whenever
      someevent is hit, a snapshot will be taken.
      
      This also adds a 'count' version that limits the number of times the
      command will be invoked:
      
          echo 'snapshot:N' > .../somesys/someevent/trigger
      
      Where N is the number of times the command will be invoked.
      
      The above command will snapshot N times for someevent, i.e. a snapshot
      will be taken on each of the first N times someevent is hit.
      
      Also adds a new tracing_alloc_snapshot() function.  The existing
      tracing_snapshot_alloc() function is a special version of
      tracing_snapshot() that also does the snapshot allocation; the snapshot
      triggers would like to be able to do just the allocation without taking
      a snapshot.  The existing tracing_snapshot_alloc() in turn now calls
      tracing_alloc_snapshot() underneath to do that allocation.
      
      Link: http://lkml.kernel.org/r/c9524dd07ce01f9dcbd59011290e0a8d5b47d7ad.1382622043.git.tom.zanussi@linux.intel.com
      Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
      [ fix up from kbuild test robot <fengguang.wu@intel.com> report ]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
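      A minimal sketch of the split described above (bodies assumed from
      kernel/trace/trace.c of that era; alloc_snapshot() is the internal
      helper that sizes the spare buffer):

      int tracing_alloc_snapshot(void)
      {
              struct trace_array *tr = &global_trace;
              int ret;

              /* just allocate the spare snapshot buffer */
              ret = alloc_snapshot(tr);
              WARN_ON(ret < 0);

              return ret;
      }

      void tracing_snapshot_alloc(void)
      {
              int ret;

              /* allocate first, then actually take the snapshot */
              ret = tracing_alloc_snapshot();
              if (ret < 0)
                      return;

              tracing_snapshot();
      }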
  7. 11 Nov 2013, 1 commit
  8. 07 Nov 2013, 2 commits
  9. 06 Nov 2013, 3 commits
    • tracing: Open tracer when ftrace_dump_on_oops is used · b2f974d6
      Committed by Cody P Schafer
      With ftrace_dump_on_oops, we previously did not open the tracer in
      question, sometimes causing the trace output to be useless.
      
      For example, the function_graph tracer with tracing_thresh set dumped via
      ftrace_dump_on_oops would show a series of '}' indented at different levels,
      but no function names.
      
      Call trace->open() (and do a few other fixups copied from the normal dump
      path) to make the output more intelligible.
      
      Link: http://lkml.kernel.org/r/1382554197-16961-1-git-send-email-cody@linux.vnet.ibm.com
      Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
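      A minimal sketch of the central fixup described above, as it would sit
      inside ftrace_dump() (surrounding iterator setup elided and assumed from
      kernel/trace/trace.c; the other fixups the message mentions are not
      shown):

      /* give the current tracer a chance to initialize its iterator
       * state, just as the normal trace_pipe open path does */
      if (iter.trace && iter.trace->open)
              iter.trace->open(&iter);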
    • tracing: Make register/unregister_ftrace_command __init · 38de93ab
      Committed by Tom Zanussi
      register/unregister_ftrace_command() are only ever called from __init
      functions, so they can themselves be made __init.
      
      Also make register_snapshot_cmd() __init for the same reason.
      
      Link: http://lkml.kernel.org/r/d4042c8cadb7ae6f843ac9a89a24e1c6a3099727.1382620672.git.tom.zanussi@linux.intel.com
      Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
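      A plausible shape for the change, sketched under assumptions: the
      registration entry points gain __init, delegating to non-init internal
      helpers (the __register_ftrace_command()/__unregister_ftrace_command()
      names here are assumptions, not confirmed by the message):

      /* only called during boot, so the wrappers can live in init memory
       * and be discarded once boot completes */
      __init int register_ftrace_command(struct ftrace_func_command *cmd)
      {
              return __register_ftrace_command(cmd);
      }

      __init int unregister_ftrace_command(struct ftrace_func_command *cmd)
      {
              return __unregister_ftrace_command(cmd);
      }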
    • tracing: Update event filters for multibuffer · f306cc82
      Committed by Tom Zanussi
      The trace event filters are still tied to event calls rather than
      event files, which means you don't get what you'd expect when using
      filters in the multibuffer case:
      
      Before:
      
        # echo 'bytes_alloc > 8192' > /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
        # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
        bytes_alloc > 8192
        # mkdir /sys/kernel/debug/tracing/instances/test1
        # echo 'bytes_alloc > 2048' > /sys/kernel/debug/tracing/instances/test1/events/kmem/kmalloc/filter
        # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
        bytes_alloc > 2048
        # cat /sys/kernel/debug/tracing/instances/test1/events/kmem/kmalloc/filter
        bytes_alloc > 2048
      
      Setting the filter in tracing/instances/test1/events shouldn't affect
      the same event in tracing/events as it does above.
      
      After:
      
        # echo 'bytes_alloc > 8192' > /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
        # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
        bytes_alloc > 8192
        # mkdir /sys/kernel/debug/tracing/instances/test1
        # echo 'bytes_alloc > 2048' > /sys/kernel/debug/tracing/instances/test1/events/kmem/kmalloc/filter
        # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
        bytes_alloc > 8192
        # cat /sys/kernel/debug/tracing/instances/test1/events/kmem/kmalloc/filter
        bytes_alloc > 2048
      
      We'd like to just move the filter directly from ftrace_event_call to
      ftrace_event_file, but there are a couple of cases that don't yet have
      multibuffer support and therefore have to continue using the current
      event_call-based filters.  For those cases, a new USE_CALL_FILTER bit
      is added to the event_call flags, whose main purpose is to keep the
      old behavior for those cases until they can be updated with
      multibuffer support; at that point, the USE_CALL_FILTER flag (and the
      new associated call_filter_check_discard() function) can go away.
      
      The multibuffer support also made filter_current_check_discard()
      redundant, so this change removes that function as well and replaces
      it with filter_check_discard() (or call_filter_check_discard() as
      appropriate).
      
      Link: http://lkml.kernel.org/r/f16e9ce4270c62f46b2e966119225e1c3cca7e60.1382620672.git.tom.zanussi@linux.intel.com
      Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
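      A minimal sketch of the two discard helpers described above.  The
      function names come from the message itself; the bodies and flag names
      are assumed from kernel code of that era.  The per-file variant consults
      the filter now attached to the ftrace_event_file, while the call-based
      variant keeps the old behavior for USE_CALL_FILTER events:

      int filter_check_discard(struct ftrace_event_file *file, void *rec,
                               struct ring_buffer *buffer,
                               struct ring_buffer_event *event)
      {
              /* event filtered out: throw away the reserved entry */
              if (unlikely(file->flags & FTRACE_EVENT_FL_FILTERED) &&
                  !filter_match_preds(file->filter, rec)) {
                      ring_buffer_discard_commit(buffer, event);
                      return 1;
              }

              return 0;
      }

      int call_filter_check_discard(struct ftrace_event_call *call, void *rec,
                                    struct ring_buffer *buffer,
                                    struct ring_buffer_event *event)
      {
              /* legacy path for events without multibuffer support */
              if (unlikely(call->flags & TRACE_EVENT_FL_FILTERED) &&
                  !filter_match_preds(call->filter, rec)) {
                      ring_buffer_discard_commit(buffer, event);
                      return 1;
              }

              return 0;
      }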
  10. 19 Oct 2013, 1 commit
    • tracing: Fix potential out-of-bounds in trace_get_user() · 057db848
      Committed by Steven Rostedt
      Andrey reported the following AddressSanitizer report:
      
      ERROR: AddressSanitizer: heap-buffer-overflow on address ffff8800359c99f3
      ffff8800359c99f3 is located 0 bytes to the right of 243-byte region [ffff8800359c9900, ffff8800359c99f3)
      Accessed by thread T13003:
        #0 ffffffff810dd2da (asan_report_error+0x32a/0x440)
        #1 ffffffff810dc6b0 (asan_check_region+0x30/0x40)
        #2 ffffffff810dd4d3 (__tsan_write1+0x13/0x20)
        #3 ffffffff811cd19e (ftrace_regex_release+0x1be/0x260)
        #4 ffffffff812a1065 (__fput+0x155/0x360)
        #5 ffffffff812a12de (____fput+0x1e/0x30)
        #6 ffffffff8111708d (task_work_run+0x10d/0x140)
        #7 ffffffff810ea043 (do_exit+0x433/0x11f0)
        #8 ffffffff810eaee4 (do_group_exit+0x84/0x130)
        #9 ffffffff810eafb1 (SyS_exit_group+0x21/0x30)
        #10 ffffffff81928782 (system_call_fastpath+0x16/0x1b)
      
      Allocated by thread T5167:
        #0 ffffffff810dc778 (asan_slab_alloc+0x48/0xc0)
        #1 ffffffff8128337c (__kmalloc+0xbc/0x500)
        #2 ffffffff811d9d54 (trace_parser_get_init+0x34/0x90)
        #3 ffffffff811cd7b3 (ftrace_regex_open+0x83/0x2e0)
        #4 ffffffff811cda7d (ftrace_filter_open+0x2d/0x40)
        #5 ffffffff8129b4ff (do_dentry_open+0x32f/0x430)
        #6 ffffffff8129b668 (finish_open+0x68/0xa0)
        #7 ffffffff812b66ac (do_last+0xb8c/0x1710)
        #8 ffffffff812b7350 (path_openat+0x120/0xb50)
        #9 ffffffff812b8884 (do_filp_open+0x54/0xb0)
        #10 ffffffff8129d36c (do_sys_open+0x1ac/0x2c0)
        #11 ffffffff8129d4b7 (SyS_open+0x37/0x50)
        #12 ffffffff81928782 (system_call_fastpath+0x16/0x1b)
      
      Shadow bytes around the buggy address:
        ffff8800359c9700: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
        ffff8800359c9780: fd fd fd fd fd fd fd fd fa fa fa fa fa fa fa fa
        ffff8800359c9800: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
        ffff8800359c9880: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
        ffff8800359c9900: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      =>ffff8800359c9980: 00 00 00 00 00 00 00 00 00 00 00 00 00 00[03]fb
        ffff8800359c9a00: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
        ffff8800359c9a80: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
        ffff8800359c9b00: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
        ffff8800359c9b80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        ffff8800359c9c00: 00 00 00 00 00 00 00 00 fa fa fa fa fa fa fa fa
      Shadow byte legend (one shadow byte represents 8 application bytes):
        Addressable:           00
        Partially addressable: 01 02 03 04 05 06 07
        Heap redzone:          fa
        Heap kmalloc redzone:  fb
        Freed heap region:     fd
        Shadow gap:            fe
      
      The out-of-bounds access happens on 'parser->buffer[parser->idx] = 0;'
      
      Although the crash happened in ftrace_regex_open(), the real bug
      occurred in trace_get_user(), where parser->idx is incremented without
      a check against the size.  It is triggered when userspace sends in 128
      characters (EVENT_BUF_SIZE + 1): the loop that reads characters stores
      the last one and then breaks out because there are no more characters.
      Then the last character is read to determine what to do next, and the
      index is incremented without checking the size.

      The caller of trace_get_user() then usually nulls out the last
      character with a zero, but since the index is equal to the size, it
      writes a nul character after the allocated space, which can corrupt
      memory.

      Luckily, only the root user has write access to this file.
      
      Link: http://lkml.kernel.org/r/20131009222323.04fd1a0d@gandalf.local.home
      Reported-by: Andrey Konovalov <andreyknvl@google.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
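      A minimal sketch of the kind of bounds check the fix adds where
      trace_get_user() handles the final character (control flow assumed from
      kernel/trace/trace.c of that era):

      /* We either got finished input or we have to wait for another call. */
      if (isspace(ch)) {
              parser->buffer[parser->idx] = 0;
              parser->cont = false;
      } else if (parser->idx < parser->size - 1) {
              /* room remains for this character plus the terminating nul */
              parser->cont = true;
              parser->buffer[parser->idx++] = ch;
      } else {
              /* input longer than the buffer: fail instead of overflowing */
              ret = -EINVAL;
              goto out;
      }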
  11. 10 Oct 2013, 1 commit
  12. 23 Aug 2013, 1 commit
  13. 03 Aug 2013, 3 commits
  14. 26 Jul 2013, 1 commit
  15. 24 Jul 2013, 7 commits
  16. 20 Jul 2013, 1 commit
  17. 19 Jul 2013, 2 commits
  18. 16 Jul 2013, 1 commit
  19. 03 Jul 2013, 3 commits
  20. 02 Jul 2013, 5 commits
  21. 20 Jun 2013, 1 commit
    • tracing: Disable tracing on warning · de7edd31
      Committed by Steven Rostedt (Red Hat)
      Add a traceoff_on_warning option in both the kernel command line as well
      as a sysctl option. When set, any WARN*() function that is hit will cause
      the tracing_on variable to be cleared, which disables writing to the
      ring buffer.
      
      This is especially useful when tracing a bug with function tracing.  When
      a warning is hit, the print caused by the warning can flood the trace with
      the functions producing the output for the warning.  This can make the
      resulting trace useless by either hiding where the bug happened or, worse,
      by overflowing the buffer and losing the trace of the bug entirely.
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
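      A minimal sketch of the command-line side of this option (symbol names
      assumed from kernel/trace/trace.c of that era; the sysctl wiring and the
      call site in the WARN*() path are elided):

      int __disable_trace_on_warning;

      static int __init stop_trace_on_warning(char *str)
      {
              __disable_trace_on_warning = 1;
              return 1;
      }
      __setup("traceoff_on_warning", stop_trace_on_warning);

      /* called from the warn path; stops ring-buffer writes if requested */
      void disable_trace_on_warning(void)
      {
              if (__disable_trace_on_warning)
                      tracing_off();
      }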