1. 20 Feb 2016, 1 commit
    • tracing, kasan: Silence Kasan warning in check_stack of stack_tracer · 6e22c836
      Yang Shi committed
      When enabling stack trace via "echo 1 > /proc/sys/kernel/stack_tracer_enabled",
      the below KASAN warning is triggered:
      
      BUG: KASAN: stack-out-of-bounds in check_stack+0x344/0x848 at addr ffffffc0689ebab8
      Read of size 8 by task ksoftirqd/4/29
      page:ffffffbdc3a27ac0 count:0 mapcount:0 mapping:          (null) index:0x0
      flags: 0x0()
      page dumped because: kasan: bad access detected
      CPU: 4 PID: 29 Comm: ksoftirqd/4 Not tainted 4.5.0-rc1 #129
      Hardware name: Freescale Layerscape 2085a RDB Board (DT)
      Call trace:
      [<ffffffc000091300>] dump_backtrace+0x0/0x3a0
      [<ffffffc0000916c4>] show_stack+0x24/0x30
      [<ffffffc0009bbd78>] dump_stack+0xd8/0x168
      [<ffffffc000420bb0>] kasan_report_error+0x6a0/0x920
      [<ffffffc000421688>] kasan_report+0x70/0xb8
      [<ffffffc00041f7f0>] __asan_load8+0x60/0x78
      [<ffffffc0002e05c4>] check_stack+0x344/0x848
      [<ffffffc0002e0c8c>] stack_trace_call+0x1c4/0x370
      [<ffffffc0002af558>] ftrace_ops_no_ops+0x2c0/0x590
      [<ffffffc00009f25c>] ftrace_graph_call+0x0/0x14
      [<ffffffc0000881bc>] fpsimd_thread_switch+0x24/0x1e8
      [<ffffffc000089864>] __switch_to+0x34/0x218
      [<ffffffc0011e089c>] __schedule+0x3ac/0x15b8
      [<ffffffc0011e1f6c>] schedule+0x5c/0x178
      [<ffffffc0001632a8>] smpboot_thread_fn+0x350/0x960
      [<ffffffc00015b518>] kthread+0x1d8/0x2b0
      [<ffffffc0000874d0>] ret_from_fork+0x10/0x40
      Memory state around the buggy address:
       ffffffc0689eb980: 00 00 00 00 00 00 00 00 f1 f1 f1 f1 00 f4 f4 f4
       ffffffc0689eba00: f3 f3 f3 f3 00 00 00 00 00 00 00 00 00 00 00 00
      >ffffffc0689eba80: 00 00 f1 f1 f1 f1 00 f4 f4 f4 f3 f3 f3 f3 00 00
                                              ^
       ffffffc0689ebb00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ffffffc0689ebb80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      
      The stack tracer traverses the whole kernel stack when saving the max stack
      trace. It may touch the stack red zones and trigger the warning. So, just disable
      the instrumentation to silence the warning.
      
      Link: http://lkml.kernel.org/r/1455309960-18930-1-git-send-email-yang.shi@linaro.org
      Signed-off-by: Yang Shi <yang.shi@linaro.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      6e22c836
  2. 30 Jan 2016, 1 commit
  3. 29 Jan 2016, 1 commit
  4. 14 Jan 2016, 1 commit
    • tracing: Fix stacktrace skip depth in trace_buffer_unlock_commit_regs() · 7717c6be
      Steven Rostedt (Red Hat) committed
      While cleaning up the stacktrace code I unintentionally changed the skip depth of
      trace_buffer_unlock_commit_regs() from 0 to 6. kprobes uses this function,
      and skipping 6 callbacks can easily produce no stack trace at all.
      
      Here's how I tested it:
      
       # echo 'p:ext4_sync_fs ext4_sync_fs ' > /sys/kernel/debug/tracing/kprobe_events
       # echo 1 > /sys/kernel/debug/tracing/events/kprobes/enable
 # cat /sys/kernel/debug/tracing/trace
                  sync-2394  [005]   502.457060: ext4_sync_fs: (ffffffff81317650)
                  sync-2394  [005]   502.457063: kernel_stack:         <stack trace>
                  sync-2394  [005]   502.457086: ext4_sync_fs: (ffffffff81317650)
                  sync-2394  [005]   502.457087: kernel_stack:         <stack trace>
                  sync-2394  [005]   502.457091: ext4_sync_fs: (ffffffff81317650)
      
      After putting back the skip stack to zero, we have:
      
                  sync-2270  [000]   748.052693: ext4_sync_fs: (ffffffff81317650)
                  sync-2270  [000]   748.052695: kernel_stack:         <stack trace>
       => iterate_supers (ffffffff8126412e)
       => sys_sync (ffffffff8129c4b6)
       => entry_SYSCALL_64_fastpath (ffffffff8181f0b2)
                  sync-2270  [000]   748.053017: ext4_sync_fs: (ffffffff81317650)
                  sync-2270  [000]   748.053019: kernel_stack:         <stack trace>
       => iterate_supers (ffffffff8126412e)
       => sys_sync (ffffffff8129c4b6)
       => entry_SYSCALL_64_fastpath (ffffffff8181f0b2)
                  sync-2270  [000]   748.053381: ext4_sync_fs: (ffffffff81317650)
                  sync-2270  [000]   748.053383: kernel_stack:         <stack trace>
       => iterate_supers (ffffffff8126412e)
       => sys_sync (ffffffff8129c4b6)
       => entry_SYSCALL_64_fastpath (ffffffff8181f0b2)
      
      Cc: stable@vger.kernel.org # v4.4+
      Fixes: 73dddbb5 "tracing: Only create stacktrace option when STACKTRACE is configured"
      Reported-by: Brendan Gregg <brendan.d.gregg@gmail.com>
      Tested-by: Brendan Gregg <brendan.d.gregg@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      7717c6be
  5. 08 Jan 2016, 2 commits
    • ftrace: Fix the race between ftrace and insmod · 5156dca3
      Qiu Peiyang committed
      We hit an ftrace_bug report when booting Android on a 64-bit ATOM SoC chip.
      Basically, there is a race between insmod and ftrace_run_update_code.
      
      After load_module=>ftrace_module_init, another thread jumps in to call
      ftrace_run_update_code=>ftrace_arch_code_modify_prepare
                              =>set_all_modules_text_rw, to mark all module
      text read-write. Since the new module is still at MODULE_STATE_UNFORMED, its
      text attribute is not changed. The 2nd thread then goes ahead to modify code.
      However, load_module continues on to call complete_formation=>set_section_ro_nx,
      so the 2nd thread fails when probing the module's text.
      
      The patch fixes this by using a notifier to delay the enabling of ftrace
      records until the module reaches MODULE_STATE_COMING.
      
      Link: http://lkml.kernel.org/r/567CE628.3000609@intel.com
      Signed-off-by: Qiu Peiyang <peiyangx.qiu@intel.com>
      Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      5156dca3
    • ftrace: Add infrastructure for delayed enabling of module functions · b7ffffbb
      Steven Rostedt (Red Hat) committed
      Qiu Peiyang pointed out that there's a race when enabling function tracing
      and loading a module. In order to make the modifications of converting nops
      in the prologue of functions into callbacks, the text needs to be converted
      from read-only to read-write. When enabling function tracing, the text
      permission is updated, the functions are modified, and then they are put
      back.
      
      When loading a module, the updates that convert function calls to mcount are
      done before the module text is set to read-only. But after that point, the
      module text is visible to the function tracer. Thus we have the following
      race:
      
      	CPU 0			CPU 1
      	-----			-----
         start function tracing
         set text to read-write
      			     load_module
      			     add functions to ftrace
      			     set module text read-only
      
         update all functions to callbacks
         modify module functions too
   < Can't, it's read-only >
      
      When this happens, ftrace detects the issue and disables itself till the
      next reboot.
      
      To fix this, a new DISABLED flag is added for ftrace records, which all
      module functions get when they are added. Then later, after the module code
      is all set, the records will have the DISABLED flag cleared, and they will
      be enabled if any callback wants all functions to be traced.
      
      Note, this patch doesn't yet move the enabling later. It simply changes
      ftrace_module_init() to do both the setting of the DISABLED records and then
      immediately call the enable code. This helps with testing the new code, as
      the behavior is unchanged. A follow-up change will have ftrace_module_enable()
      called after the text is set to read-only.
      
      Cc: Qiu Peiyang <peiyangx.qiu@intel.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      b7ffffbb
  6. 05 Jan 2016, 1 commit
    • tracing: Fix setting of start_index in find_next() · f36d1be2
      Qiu Peiyang committed
      When we do cat /sys/kernel/debug/tracing/printk_formats, we hit a kernel
      panic in t_show().
      
      general protection fault: 0000 [#1] PREEMPT SMP
      CPU: 0 PID: 2957 Comm: sh Tainted: G W  O 3.14.55-x86_64-01062-gd4acdc7 #2
      RIP: 0010:[<ffffffff811375b2>]
       [<ffffffff811375b2>] t_show+0x22/0xe0
      RSP: 0000:ffff88002b4ebe80  EFLAGS: 00010246
      RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000004
      RDX: 0000000000000004 RSI: ffffffff81fd26a6 RDI: ffff880032f9f7b1
      RBP: ffff88002b4ebe98 R08: 0000000000001000 R09: 000000000000ffec
      R10: 0000000000000000 R11: 000000000000000f R12: ffff880004d9b6c0
      R13: 7365725f6d706400 R14: ffff880004d9b6c0 R15: ffffffff82020570
      FS:  0000000000000000(0000) GS:ffff88003aa00000(0063) knlGS:00000000f776bc40
      CS:  0010 DS: 002b ES: 002b CR0: 0000000080050033
      CR2: 00000000f6c02ff0 CR3: 000000002c2b3000 CR4: 00000000001007f0
      Call Trace:
       [<ffffffff811dc076>] seq_read+0x2f6/0x3e0
       [<ffffffff811b749b>] vfs_read+0x9b/0x160
       [<ffffffff811b7f69>] SyS_read+0x49/0xb0
       [<ffffffff81a3a4b9>] ia32_do_call+0x13/0x13
       ---[ end trace 5bd9eb630614861e ]---
      Kernel panic - not syncing: Fatal exception
      
      When find_next() calls find_next_mod_format() for the first time, it should
      iterate trace_bprintk_fmt_list to find the first print format of the
      module. In the current code, however, start_index is smaller than *pos
      at that point, so the code does not iterate the list. The subsequent
      container_of() then computes an address from the stale v, leaving mod_fmt
      a meaningless object, and the returned mod_fmt->fmt along with it.

      This patch fixes that by correcting start_index. After the fix, start_index
      equals *pos on the first call to find_next_mod_format(), the code iterates
      trace_bprintk_fmt_list, and the right module printk format is returned in
      mod_fmt->fmt.
      
      Link: http://lkml.kernel.org/r/5684B900.9000309@intel.com
      
      Cc: stable@vger.kernel.org # 3.12+
      Fixes: 102c9323 "tracing: Add __tracepoint_string() to export string pointers"
      Signed-off-by: Qiu Peiyang <peiyangx.qiu@intel.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f36d1be2
  7. 04 Jan 2016, 2 commits
  8. 24 Dec 2015, 8 commits
  9. 02 Dec 2015, 1 commit
    • tracing: Add sched_wakeup_new and sched_waking tracepoints for pid filter · 0f72e37e
      Steven Rostedt (Red Hat) committed
      The set_event_pid filter relies on attaching to the sched_switch and
      sched_wakeup tracepoints to see if it should filter the tracing on schedule
      tracepoints. By adding the callbacks to sched_wakeup, the wakeups of the
      tasks whose pids are listed in the set_event_pid file will be traced.
      
      But sched_wakeup_new and sched_waking were missed. These two should also be
      traced. Luckily, these tracepoints share the same class as sched_wakeup
      which means they can use the same pre and post callbacks as sched_wakeup
      does.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      0f72e37e
  10. 26 Nov 2015, 5 commits
  11. 24 Nov 2015, 4 commits
    • ring-buffer: Remove redundant update of page timestamp · 70004986
      Steven Rostedt (Red Hat) committed
      The first commit on a buffer page updates the timestamp of that page. There
      is no need for the update to the next page to add the timestamp too; it will
      only be replaced by the first commit on that page anyway.

      Only update a page's timestamp if the page contains an event.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      70004986
    • ring-buffer: Use READ_ONCE() for most tail_page access · 8573636e
      Steven Rostedt (Red Hat) committed
      As cpu_buffer->tail_page may be modified by interrupts at almost any time,
      the flow of logic is very important. Do not let gcc get smart with
      re-reading cpu_buffer->tail_page by adding READ_ONCE() around most of its
      accesses.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      8573636e
    • ring-buffer: Put back the length if crossed page with add_timestamp · bd1b7cd3
      Steven Rostedt (Red Hat) committed
      Commit fcc742ea "ring-buffer: Add event descriptor to simplify passing
      data" added a descriptor that holds various data instead of passing around
      several variables through parameters. The problem is that one of those
      parameters used to be modified inside a function without the modification
      being visible to the caller. Now that the parameter is part of the
      descriptor and modifications to it persist, the size of the data could be
      unnecessarily expanded.
      
      Remove the extra space added if a timestamp was added and the event went
      across the page.
      
      Cc: stable@vger.kernel.org # 4.3+
      Fixes: fcc742ea "ring-buffer: Add event descriptor to simplify passing data"
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      bd1b7cd3
    • ring-buffer: Update read stamp with first real commit on page · b81f472a
      Steven Rostedt (Red Hat) committed
      Do not update the read stamp after swapping out the reader page from the
      write buffer. If the reader page is swapped out of the buffer before an
      event is written to it, then the read_stamp may get an out of date
      timestamp, as the page timestamp is updated on the first commit to that
      page.
      
      rb_get_reader_page() only returns a page if it has an event on it, otherwise
      it will return NULL. At that point, check if the page being returned has
      events and has not been read yet. Then at that point update the read_stamp
      to match the time stamp of the reader page.
      
      Cc: stable@vger.kernel.org # 2.6.30+
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      b81f472a
  12. 23 Nov 2015, 1 commit
    • treewide: Remove old email address · 90eec103
      Peter Zijlstra committed
      There were still a number of references to my old Red Hat email
      address in the kernel source. Remove these while keeping the
      Red Hat copyright notices intact.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      90eec103
  13. 11 Nov 2015, 1 commit
    • bpf_trace: Make dependent on PERF_EVENTS · a31d82d8
      Steven Rostedt committed
      Arnd Bergmann reported:
      
        In my ARM randconfig tests, I'm getting a build error for
        newly added code in bpf_perf_event_read and bpf_perf_event_output
        whenever CONFIG_PERF_EVENTS is disabled:
      
        kernel/trace/bpf_trace.c: In function 'bpf_perf_event_read':
        kernel/trace/bpf_trace.c:203:11: error: 'struct perf_event' has no member named 'oncpu'
        if (event->oncpu != smp_processor_id() ||
                 ^
        kernel/trace/bpf_trace.c:204:11: error: 'struct perf_event' has no member named 'pmu'
              event->pmu->count)
      
        This can happen when UPROBE_EVENT is enabled but KPROBE_EVENT
        is disabled. I'm not sure if that is a configuration we care
        about, otherwise we could prevent this case from occurring by
        adding Kconfig dependencies.
      
      Looking at this further, it's really that UPROBE_EVENT enables PERF_EVENTS.
      By just having BPF_EVENTS depend on PERF_EVENTS, all is fine.
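      In Kconfig terms the fix is a one-line dependency. A sketch of the resulting
      block (simplified; the real entry in kernel/trace/Kconfig carries additional
      dependencies):

```kconfig
config BPF_EVENTS
	bool
	depends on BPF_SYSCALL
	depends on PERF_EVENTS
	default y
```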
      
      Link: http://lkml.kernel.org/r/4525348.Aq9YoXkChv@wuerfel
      Reported-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a31d82d8
  14. 10 Nov 2015, 1 commit
  15. 08 Nov 2015, 1 commit
  16. 06 Nov 2015, 1 commit
  17. 04 Nov 2015, 8 commits
    • tracing: Put back comma for empty fields in boot string parsing · 43ed3843
      Steven Rostedt (Red Hat) committed
      Both early_enable_events() and apply_trace_boot_options() parse a boot
      string that may get parsed later on. They both use strsep() which converts a
      comma into a nul character. To still allow the boot string to be parsed
      again the same way, the nul character gets converted back to a comma after
      the token is processed.
      
      The problem is that these two functions check for an empty parameter (two
      commas in a row ",,") and continue the loop if the parameter is empty, but
      fail to put the comma back. In this case, the second parsing will end at
      this blank field and not process the fields after it.

      In most cases users should not have an empty field, but if it's going to be
      checked, the code might as well be correct.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      43ed3843
    • tracing: Apply tracer specific options from kernel command line. · a4d1e688
      Jiaxing Wang committed
      Currently, the trace_options parameter is only applied in
      tracer_alloc_buffers() when global_trace.current_trace is nop_trace,
      so a tracer specific option will not be applied even when the specific
      tracer is also enabled from kernel command line. For example, the
      'func_stack_trace' option can't be enabled with the following kernel
      parameter:
      
        ftrace=function ftrace_filter=kfree trace_options=func_stack_trace
      
      We can enable tracer-specific options by simply applying the options again
      if the specific tracer is also supplied on the command line and started
      in register_tracer().

      So that trace_boot_options_buf can be parsed again, a comma and a space
      are put back if they were replaced by strsep() and strstrip() respectively.

      Also mark register_tracer() __init so it can access __init data; in
      fact, register_tracer() is only called from __init code.
      
      Link: http://lkml.kernel.org/r/1446599669-9294-1-git-send-email-hello.wjx@gmail.com
      Signed-off-by: Jiaxing Wang <hello.wjx@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      a4d1e688
    • ring_buffer: Remove unneeded smp_wmb() before wakeup of reader benchmark · 54ed1444
      Steven Rostedt (Red Hat) committed
      wake_up_process() has a memory barrier before doing anything, thus adding a
      memory barrier before calling it is redundant.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      54ed1444
    • tracing: Allow dumping traces without tracking trace started cpus · 919cd979
      Sasha Levin committed
      We don't init iter->started when dumping the ftrace buffer, and there's no
      real need to do so - so allow skipping that check if the iter doesn't have
      an initialized ->started cpumask.
      
      Link: http://lkml.kernel.org/r/1441385156-27279-1-git-send-email-sasha.levin@oracle.com
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      919cd979
    • ring_buffer: Fix more races when terminating the producer in the benchmark · f47cb66d
      Petr Mladek committed
      The commit b44754d8 ("ring_buffer: Allow to exit the ring
      buffer benchmark immediately") added a hack into ring_buffer_producer()
      that set @kill_test when kthread_should_stop() returned true. It improved
      the situation a lot. It stopped the kthread in most cases because
      the producer spent most of its time in the patched while loop.

      But there are still a few possible races when kthread_should_stop()
      is set outside of that loop. Then we do not set @kill_test and
      some other checks pass.
      
      This patch adds a better fix. It renames @test_kill/TEST_KILL() into
      a better descriptive @test_error/TEST_ERROR(). Also it introduces
      break_test() function that checks for both @test_error and
      kthread_should_stop().
      
      The new function is used in the producer when the check for @test_error
      is not enough. It is not used in the consumer because its state
      is manipulated by the producer via the "reader_finish" variable.
      
      Also we add a missing check into ring_buffer_producer_thread()
      between setting TASK_INTERRUPTIBLE and calling schedule_timeout().
      Otherwise, we might miss a wakeup from kthread_stop().
      
      Link: http://lkml.kernel.org/r/1441629518-32712-3-git-send-email-pmladek@suse.com
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f47cb66d
    • ring_buffer: Do not complete benchmark reader too early · 8b46ff69
      Petr Mladek committed
      It seems that complete(&read_done) might be called too early
      in some situations.
      
      1st scenario:
      -------------
      
      CPU0					CPU1
      
      ring_buffer_producer_thread()
        wake_up_process(consumer);
        wait_for_completion(&read_start);
      
      					ring_buffer_consumer_thread()
      					  complete(&read_start);
      
        ring_buffer_producer()
          # producing data in
          # the do-while cycle
      
      					  ring_buffer_consumer();
      					    # reading data
      					    # got error
      					    # set kill_test = 1;
      					    set_current_state(
      						TASK_INTERRUPTIBLE);
      					    if (reader_finish)  # false
      					    schedule();
      
          # producer still in the middle of
          # do-while cycle
          if (consumer && !(cnt % wakeup_interval))
            wake_up_process(consumer);
      
      					    # spurious wakeup
      					    while (!reader_finish &&
      						   !kill_test)
      					    # leaving because
      					    # kill_test == 1
      					    reader_finish = 0;
      					    complete(&read_done);
      
      1st BANG: We might access an uninitialized "read_done" if this is
      	  the first round.
      
          # producer finally leaving
          # the do-while cycle because kill_test == 1;
      
          if (consumer) {
            reader_finish = 1;
            wake_up_process(consumer);
            wait_for_completion(&read_done);
      
      2nd BANG: This will never complete because consumer already did
      	  the completion.
      
      2nd scenario:
      -------------
      
      CPU0					CPU1
      
      ring_buffer_producer_thread()
        wake_up_process(consumer);
        wait_for_completion(&read_start);
      
      					ring_buffer_consumer_thread()
      					  complete(&read_start);
      
        ring_buffer_producer()
          # CPU3 removes the module	  <--- difference from
          # and stops producer          <--- the 1st scenario
          if (kthread_should_stop())
            kill_test = 1;
      
      					  ring_buffer_consumer();
      					    while (!reader_finish &&
      						   !kill_test)
      					    # kill_test == 1 => we never go
      					    # into the top level while()
      					    reader_finish = 0;
      					    complete(&read_done);
      
          # producer still in the middle of
          # do-while cycle
          if (consumer && !(cnt % wakeup_interval))
            wake_up_process(consumer);
      
      					    # spurious wakeup
      					    while (!reader_finish &&
      						   !kill_test)
      					    # leaving because kill_test == 1
      					    reader_finish = 0;
      					    complete(&read_done);
      
      BANG: We are in the same "bang" situations as in the 1st scenario.
      
      Root of the problem:
      --------------------
      
      ring_buffer_consumer() must complete "read_done" only when "reader_finish"
      variable is set. It must not be skipped due to other conditions.
      
      Note that we still must keep the check for "reader_finish" in a loop
      because there might be spurious wakeups as described in the
      above scenarios.
      
      Solution:
      ----------
      
      The top level loop in ring_buffer_consumer() will finish only when
      "reader_finish" is set. The data will be read in a "while-do" cycle
      so that it is not read after an error (kill_test == 1)
      or a spurious wakeup.
      
      In addition, "reader_finish" is manipulated by the producer thread.
      Therefore we add READ_ONCE() to make sure that the fresh value is
      read in each cycle. Also we add the corresponding barrier
      to synchronize the sleep check.
      
      Next we set the state back to TASK_RUNNING for the situation where we
      did not sleep.
      
      Purely out of paranoia, we initialize both completions statically.
      This is safer, in case there are other races that we are unaware of.
      
      As a side effect we could remove the memory barrier from
      ring_buffer_producer_thread(). IMHO, this was the reason for
      the barrier. ring_buffer_reset() uses spin locks that should
      provide the needed memory barrier for using the buffer.
      
      Link: http://lkml.kernel.org/r/1441629518-32712-2-git-send-email-pmladek@suse.com
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      8b46ff69
    • tracing: Remove redundant TP_ARGS redefining · fb8c2293
      Dmitry Safonov committed
      TP_ARGS is not used anywhere in trace.h or trace_entries.h.
      As a first check, I left just the #undef TP_ARGS and saw no build errors, so
      remove the redundant redefinition entirely.
      
      Link: http://lkml.kernel.org/r/1446576560-14085-1-git-send-email-0x7f454c46@gmail.com
      Signed-off-by: Dmitry Safonov <0x7f454c46@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      fb8c2293
    • tracing: Rename max_stack_lock to stack_trace_max_lock · d332736d
      Steven Rostedt (Red Hat) committed
      Now that max_stack_lock is a global variable, it requires a naming
      convention that is unlikely to collide. Rename it to the same naming
      convention that the other stack_trace variables have.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      d332736d