- 24 Nov 2015, 2 commits
-
-
Committed by Steven Rostedt (Red Hat)
Commit fcc742ea "ring-buffer: Add event descriptor to simplify passing data" added a descriptor that holds various data instead of passing around several variables through parameters. The problem was that one of the parameters was modified in a function and the code was designed not to have an effect on that modified parameter. Now that the parameter is a descriptor and any modifications to it are non-volatile, the size of the data could be unnecessarily expanded. Remove the extra space added if a timestamp was added and the event went across the page. Cc: stable@vger.kernel.org # 4.3+ Fixes: fcc742ea "ring-buffer: Add event descriptor to simplify passing data" Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Steven Rostedt (Red Hat)
Do not update the read stamp after swapping out the reader page from the write buffer. If the reader page is swapped out of the buffer before an event is written to it, then the read_stamp may get an out of date timestamp, as the page timestamp is updated on the first commit to that page. rb_get_reader_page() only returns a page if it has an event on it, otherwise it will return NULL. At that point, check if the page being returned has events and has not been read yet; then update the read_stamp to match the time stamp of the reader page. Cc: stable@vger.kernel.org # 2.6.30+ Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
- 11 Nov 2015, 1 commit
-
-
Committed by Steven Rostedt
Arnd Bergmann reported: In my ARM randconfig tests, I'm getting a build error for newly added code in bpf_perf_event_read and bpf_perf_event_output whenever CONFIG_PERF_EVENTS is disabled: kernel/trace/bpf_trace.c: In function 'bpf_perf_event_read': kernel/trace/bpf_trace.c:203:11: error: 'struct perf_event' has no member named 'oncpu' if (event->oncpu != smp_processor_id() || ^ kernel/trace/bpf_trace.c:204:11: error: 'struct perf_event' has no member named 'pmu' event->pmu->count) This can happen when UPROBE_EVENT is enabled but KPROBE_EVENT is disabled. I'm not sure if that is a configuration we care about, otherwise we could prevent this case from occurring by adding Kconfig dependencies. Looking at this further, it's really that UPROBE_EVENT enables PERF_EVENTS. By just having BPF_EVENTS depend on PERF_EVENTS, then all is fine. Link: http://lkml.kernel.org/r/4525348.Aq9YoXkChv@wuerfel Reported-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 10 Nov 2015, 1 commit
-
-
Committed by Chen Gang
tracing_max_lat_fops is used only when TRACER_MAX_TRACE is enabled, so put the related code under the same condition. The related warning with defconfig under x86_64: CC kernel/trace/trace.o kernel/trace/trace.c:5466:37: warning: ‘tracing_max_lat_fops’ defined but not used [-Wunused-const-variable] static const struct file_operations tracing_max_lat_fops = { Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
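A minimal sketch of the shape of the fix, assuming the definition is simply wrapped in the same config guard as its users (the member initializers shown are illustrative, not copied from trace.c):

    #ifdef CONFIG_TRACER_MAX_TRACE
    static const struct file_operations tracing_max_lat_fops = {
            .open   = tracing_open_generic,
            .read   = tracing_max_lat_read,
            .write  = tracing_max_lat_write,
            .llseek = generic_file_llseek,
    };
    #endif /* CONFIG_TRACER_MAX_TRACE */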
-
- 08 Nov 2015, 1 commit
-
-
Committed by Dmitry Safonov
Since the ring buffer is lockless, there is no need to disable ftrace on a CPU, and no one is doing so: after commit 68179686 ("tracing: Remove ftrace_disable/enable_cpu()") ftrace_cpu_disabled stays the same after initialization; nothing changes it. ftrace_cpu_disabled shouldn't be used by any external module since it disables only the function and function_graph tracers but not any other tracer. Link: http://lkml.kernel.org/r/1446836846-22239-1-git-send-email-0x7f454c46@gmail.com Signed-off-by: Dmitry Safonov <0x7f454c46@gmail.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
- 06 Nov 2015, 1 commit
-
-
Committed by Jiaxing Wang
Currently tracing_init_dentry() returns -ENODEV when debugfs is not configured in, which causes tracefs not to be populated with the tracing files and directories, so we get an empty directory even after manually mounting tracefs. Make tracing_init_dentry() return NULL when debugfs is not configured in, so that tracefs can be mounted manually; but keep returning -ENODEV when debugfs is configured in but not initialized, or when the automount point could not be created, as changing that would break backward compatibility with older tools. Link: http://lkml.kernel.org/r/1446797056-11683-1-git-send-email-hello.wjx@gmail.com Signed-off-by: Jiaxing Wang <hello.wjx@gmail.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
- 04 Nov 2015, 9 commits
-
-
Committed by Steven Rostedt (Red Hat)
Both early_enable_events() and apply_trace_boot_options() parse a boot string that may get parsed later on. They both use strsep(), which converts a comma into a nul character. To still allow the boot string to be parsed again the same way, the nul character gets converted back to a comma after the token is processed. The problem is that these two functions check for an empty parameter (two commas in a row ",,"), and continue the loop if the parameter is empty, but fail to place the comma back. In this case, the second parsing will end at this blank field and not process the fields afterward. In most cases, users should not have an empty field, but if it's going to be checked, the code might as well be correct. Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
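A hedged sketch of the parsing pattern described above; handle_token() is a hypothetical stand-in for the real per-field handling, and the point is that the ',' must be restored even when the field turns out to be empty:

    while (buf) {
            char *token = strsep(&buf, ",");

            if (*token)
                    handle_token(token);    /* hypothetical handler */
            /* strsep() replaced the ',' with '\0'; put it back (even for
             * an empty field) so the same string can be parsed again */
            if (buf)
                    *(buf - 1) = ',';
    }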
-
Committed by Jiaxing Wang
Currently, the trace_options parameter is only applied in tracer_alloc_buffers() when global_trace.current_trace is nop_trace, so a tracer-specific option will not be applied even when that tracer is also enabled from the kernel command line. For example, the 'func_stack_trace' option can't be enabled with the following kernel parameters: ftrace=function ftrace_filter=kfree trace_options=func_stack_trace We can enable tracer-specific options by simply applying the options again if the specific tracer is also supplied from the command line and started in register_tracer(). To allow trace_boot_options_buf to be parsed again, a comma and a space are put back if they were replaced by strsep() and strstrip() respectively. Also make register_tracer() __init so it can access the __init data; in fact register_tracer() is only called from __init code. Link: http://lkml.kernel.org/r/1446599669-9294-1-git-send-email-hello.wjx@gmail.com Signed-off-by: Jiaxing Wang <hello.wjx@gmail.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Steven Rostedt (Red Hat)
wake_up_process() has a memory barrier before doing anything, thus adding a memory barrier before calling it is redundant. Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Sasha Levin
We don't init iter->started when dumping the ftrace buffer, and there's no real need to do so - so allow skipping that check if the iter doesn't have an initialized ->started cpumask. Link: http://lkml.kernel.org/r/1441385156-27279-1-git-send-email-sasha.levin@oracle.com Signed-off-by: Sasha Levin <sasha.levin@oracle.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Petr Mladek
The commit b44754d8 ("ring_buffer: Allow to exit the ring buffer benchmark immediately") added a hack into ring_buffer_producer() that set @kill_test when kthread_should_stop() returned true. It improved the situation a lot. It stopped the kthread in most cases because the producer spent most of the time in the patched while cycle. But there are still a few possible races when kthread_should_stop() is set outside of the cycle. Then we do not set @kill_test and some other checks pass. This patch adds a better fix. It renames @test_kill/TEST_KILL() to the more descriptive @test_error/TEST_ERROR(). Also it introduces a break_test() function that checks for both @test_error and kthread_should_stop(). The new function is used in the producer when the check for @test_error is not enough. It is not used in the consumer because its state is manipulated by the producer via the "reader_finish" variable. Also we add a missing check into ring_buffer_producer_thread() between setting TASK_INTERRUPTIBLE and calling schedule_timeout(). Otherwise, we might miss a wakeup from kthread_stop(). Link: http://lkml.kernel.org/r/1441629518-32712-3-git-send-email-pmladek@suse.com Signed-off-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
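A hedged sketch of what break_test() amounts to, based on the description above (the actual helper in ring_buffer_benchmark.c may differ in detail):

    static bool break_test(void)
    {
            /* stop producing on an internal error or when the kthread is
             * being stopped (e.g. on module removal) */
            return test_error || kthread_should_stop();
    }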
-
Committed by Petr Mladek
It seems that complete(&read_done) might be called too early in some situations.

1st scenario:
-------------

CPU0                                      CPU1
----                                      ----
ring_buffer_producer_thread()
  wake_up_process(consumer);
  wait_for_completion(&read_start);

                                          ring_buffer_consumer_thread()
                                            complete(&read_start);

  ring_buffer_producer()
    # producing data in
    # the do-while cycle

                                            ring_buffer_consumer();
                                              # reading data
                                              # got error
                                              # set kill_test = 1;
                                              set_current_state(
                                                TASK_INTERRUPTIBLE);
                                              if (reader_finish)  # false
                                                schedule();

    # producer still in the middle of
    # do-while cycle
    if (consumer && !(cnt % wakeup_interval))
      wake_up_process(consumer);

                                              # spurious wakeup
                                              while (!reader_finish && !kill_test)
                                                # leaving because kill_test == 1
                                              reader_finish = 0;
                                              complete(&read_done);

1st BANG: We might access uninitialized "read_done" if this is the first round.

    # producer finally leaving
    # the do-while cycle because kill_test == 1;
    if (consumer) {
      reader_finish = 1;
      wake_up_process(consumer);
      wait_for_completion(&read_done);

2nd BANG: This will never complete because the consumer already did the completion.

2nd scenario:
-------------

CPU0                                      CPU1
----                                      ----
ring_buffer_producer_thread()
  wake_up_process(consumer);
  wait_for_completion(&read_start);

                                          ring_buffer_consumer_thread()
                                            complete(&read_start);

  ring_buffer_producer()
    # CPU3 removes the module   <--- difference from
    # and stops producer        <--- the 1st scenario
    if (kthread_should_stop())
      kill_test = 1;

                                            ring_buffer_consumer();
                                              while (!reader_finish && !kill_test)
                                                # kill_test == 1 => we never go
                                                # into the top level while()
                                              reader_finish = 0;
                                              complete(&read_done);

    # producer still in the middle of
    # do-while cycle
    if (consumer && !(cnt % wakeup_interval))
      wake_up_process(consumer);

                                              # spurious wakeup
                                              while (!reader_finish && !kill_test)
                                                # leaving because kill_test == 1
                                              reader_finish = 0;
                                              complete(&read_done);

BANG: We are in the same "bang" situations as in the 1st scenario.

Root of the problem:
--------------------
ring_buffer_consumer() must complete "read_done" only when the "reader_finish" variable is set. It must not be skipped due to other conditions. Note that we still must keep the check for "reader_finish" in a loop because there might be spurious wakeups as described in the above scenarios.

Solution:
---------
The top level cycle in ring_buffer_consumer() will finish only when "reader_finish" is set. The data will be read in a "while-do" cycle so that they are not read after an error (kill_test == 1) or a spurious wake up. In addition, "reader_finish" is manipulated by the producer thread. Therefore we add READ_ONCE() to make sure that a fresh value is read in each cycle. Also we add the corresponding barrier to synchronize the sleep check. Next we set the state back to TASK_RUNNING for the situation where we did not sleep. Just for paranoid reasons, we initialize both completions statically. This is safer, in case there are other races that we are unaware of. As a side effect we could remove the memory barrier from ring_buffer_producer_thread(). IMHO, this was the reason for the barrier; ring_buffer_reset() uses spin locks that should provide the needed memory barrier for using the buffer.
Link: http://lkml.kernel.org/r/1441629518-32712-2-git-send-email-pmladek@suse.com Signed-off-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Dmitry Safonov
TP_ARGS is not used anywhere in trace.h nor trace_entries.h. Firstly, I left just #undef TP_ARGS and had no errors - so remove it. Link: http://lkml.kernel.org/r/1446576560-14085-1-git-send-email-0x7f454c46@gmail.com Signed-off-by: Dmitry Safonov <0x7f454c46@gmail.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Steven Rostedt (Red Hat)
Now that max_stack_lock is a global variable, it requires a naming convention that is unlikely to collide. Rename it to follow the same naming convention that the other stack_trace variables have. Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by AKASHI Takahiro
A stack frame may be used in a different way depending on the cpu architecture. Thus it is not always appropriate to slurp the stack contents, as the current check_stack() does, in order to calculate a stack index (height) at a given function call. At least not on arm64. In addition, there is a possibility that we will mistakenly detect a stale stack frame which has not been overwritten. This patch makes check_stack() a weak function so that an arch-specific version can be implemented later. Link: http://lkml.kernel.org/r/1446182741-31019-5-git-send-email-takahiro.akashi@linaro.org Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
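A hedged sketch of the weak-function pattern being introduced; the prototype and body shown are illustrative only, the real signature lives in the stack tracer code:

    /* generic implementation; an architecture that lays out its stack
     * frames differently can provide its own non-weak check_stack() */
    void __weak check_stack(unsigned long ip, unsigned long *stack)
    {
            /* slurp the stack contents and work out the stack index
             * (height) for the current function call */
    }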
-
- 03 Nov 2015, 11 commits
-
-
Committed by Yaowei Bai
Make ftrace_event_is_function() return bool to improve readability, since this particular function only uses either one or zero as its return value. No functional change. Link: http://lkml.kernel.org/r/1443537816-5788-9-git-send-email-bywxiaobai@163.com Signed-off-by: Yaowei Bai <bywxiaobai@163.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Yaowei Bai
Make is_legal_op() return bool to improve readability, since this particular function only uses either one or zero as its return value. No functional change. Link: http://lkml.kernel.org/r/1443537816-5788-8-git-send-email-bywxiaobai@163.com Signed-off-by: Yaowei Bai <bywxiaobai@163.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Yaowei Bai
Make rb_event_is_commit() return bool to improve readability, since this particular function only uses either one or zero as its return value. No functional change. Link: http://lkml.kernel.org/r/1443537816-5788-7-git-send-email-bywxiaobai@163.com Signed-off-by: Yaowei Bai <bywxiaobai@163.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Yaowei Bai
Make rb_per_cpu_empty() return bool to improve readability. No functional change. Link: http://lkml.kernel.org/r/1443537816-5788-6-git-send-email-bywxiaobai@163.com Signed-off-by: Yaowei Bai <bywxiaobai@163.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Yaowei Bai
Make ring_buffer_empty() and ring_buffer_empty_cpu() return bool. No functional change. Link: http://lkml.kernel.org/r/1443537816-5788-5-git-send-email-bywxiaobai@163.com Signed-off-by: Yaowei Bai <bywxiaobai@163.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Yaowei Bai
Make rb_is_reader_page() return bool to improve readability, since this particular function only uses either true or false as its return value. No functional change. Link: http://lkml.kernel.org/r/1443537816-5788-4-git-send-email-bywxiaobai@163.com Signed-off-by: Yaowei Bai <bywxiaobai@163.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Yaowei Bai
This patch makes report_latency() return bool, since this particular function only uses either one or zero as its return value. No functional change. Link: http://lkml.kernel.org/r/1443537816-5788-3-git-send-email-bywxiaobai@163.com Signed-off-by: Yaowei Bai <bywxiaobai@163.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Yaowei Bai
This patch makes report_latency() return bool to improve readability, indicating whether this new latency should be reported/recorded. No functional change. Link: http://lkml.kernel.org/r/1443537816-5788-2-git-send-email-bywxiaobai@163.com Signed-off-by: Yaowei Bai <bywxiaobai@163.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Jiaxing Wang
Update instance_rmdir() to use tracefs_remove_recursive() instead of debugfs_remove_recursive(). This was left over from the transition from debugfs to tracefs. Link: http://lkml.kernel.org/r/1445169490-18315-2-git-send-email-hello.wjx@gmail.com Cc: stable@vger.kernel.org # 4.1+ Fixes: 8434dc93 ("tracing: Convert the tracing facility over to use tracefs") Signed-off-by: Jiaxing Wang <hello.wjx@gmail.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Chunyan Zhang
There's no need to record the time tracepoints take when tracing is off. This is because: 1) We cannot see these records anyway, since ring buffer recording is off at that moment. 2) If tracing is off while the benchmark tracepoint is enabled, the time the tracepoint takes is less than in the same situation with tracing on, since with tracing on the data also needs to be written into the ring buffer, which takes more time. If tracing is turned on at that moment, the average and standard deviation cannot accurately represent the time the tracepoints take to write data into the ring buffer. Link: http://lkml.kernel.org/r/1445947933-27955-1-git-send-email-zhang.chunyan@linaro.org Signed-off-by: Chunyan Zhang <zhang.chunyan@linaro.org> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Steven Rostedt (Red Hat)
For the case where pids are already in set_event_pid and one is added or removed, each CPU should be checked to make sure that the new or old pid is on or not on a CPU. For example: # echo 123 >> set_event_pid or # echo '!123' >> set_event_pid Link: http://lkml.kernel.org/r/20151030061643.GA19480@cac Suggested-by: Jiaxing Wang <hello.wjx@gmail.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
- 30 Oct 2015, 1 commit
-
-
Committed by Davidlohr Bueso
This is really about simplifying the double xchg patterns into a single cmpxchg, with the same logic. Other than the immediate cleanup, there are some subtleties this change deals with: (i) While the load of the old bt is fully ordered wrt everything, ie: old_bt = xchg(&q->blk_trace, bt); [barrier] if (old_bt) (void) xchg(&q->blk_trace, old_bt); [barrier] blk_trace could still be changed between the xchg and the old_bt load. Note that this description is merely theoretical and afaict the window is very small, but doing everything in a single context with cmpxchg closes this potential race. (ii) Ordering guarantees are obviously kept with cmpxchg. (iii) Gets rid of the hacky-by-nature (void)xchg pattern. Signed-off-by: Davidlohr Bueso <dbueso@suse.de> Reviewed-by: Jeff Moyer <jmoyer@redhat.com> Signed-off-by: Jens Axboe <axboe@fb.com>
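A hedged sketch of the before/after shape described (error handling and the surrounding blktrace setup are omitted):

    /* before: two xchg() calls with a window between them */
    old_bt = xchg(&q->blk_trace, bt);
    if (old_bt) {
            (void)xchg(&q->blk_trace, old_bt);
            ret = -EBUSY;
    }

    /* after: a single cmpxchg() installs bt only if no trace was set */
    if (cmpxchg(&q->blk_trace, NULL, bt))
            ret = -EBUSY;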
-
- 27 Oct 2015, 2 commits
-
-
Committed by Alexei Starovoitov
Exported perf symbols are GPL only, so mark the eBPF helper functions used in tracing as GPL only as well. Suggested-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Alexei Starovoitov
Fix safety checks for bpf_perf_event_read(): - only non-inherited events can be added to the perf_event_array map (do this check statically at map insertion time) - dynamically check that the event is local and !pmu->count Otherwise a buggy bpf program can cause a kernel splat. Also fix the error path after perf_event_attrs() and remove a redundant 'extern'. Fixes: 35578d79 ("bpf: Implement function bpf_perf_event_read() that get the selected hardware PMU conuter") Signed-off-by: Alexei Starovoitov <ast@kernel.org> Tested-by: Wang Nan <wangnan0@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 26 Oct 2015, 4 commits
-
-
Committed by Steven Rostedt (Red Hat)
p_start() and p_stop() are seq_file functions that match. Teach sparse to know that the rcu_read_lock_sched() that is taken by p_start() is released by p_stop(). Reported-by: kbuild test robot <fengguang.wu@intel.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
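A hedged sketch of the sparse annotations this implies; the bodies are elided and the real p_start()/p_stop() do more work:

    static void *p_start(struct seq_file *m, loff_t *pos)
            __acquires(RCU)
    {
            rcu_read_lock_sched();
            /* ... find and return the element for *pos ... */
            return NULL;            /* placeholder */
    }

    static void p_stop(struct seq_file *m, void *p)
            __releases(RCU)
    {
            rcu_read_unlock_sched();
    }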
-
Committed by Steven Rostedt (Red Hat)
My tests found that if a task is running but not filtered when set_event_pid is modified, then it can still be traced. Call on_each_cpu() to check if the currently running task should be filtered and update the per-cpu flags of tr->data appropriately. Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
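A hedged sketch of the mechanism described; the callback name and helpers are illustrative stand-ins, not the exact kernel code:

    /* runs on every CPU: decide whether events from the task currently
     * running there should be ignored, based on the set_event_pid list */
    static void ignore_task_cpu(void *data)
    {
            struct trace_array *tr = data;

            /* pid_is_filtered() and set_percpu_ignore_flag() are
             * hypothetical stand-ins for the per-cpu flag update in
             * tr->data */
            set_percpu_ignore_flag(tr, !pid_is_filtered(tr, current->pid));
    }

    /* called from the set_event_pid write handler, waiting for all CPUs */
    on_each_cpu(ignore_task_cpu, tr, 1);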
-
Committed by Steven Rostedt (Red Hat)
Add the necessary hooks to use the pids loaded in set_event_pid to filter all the events enabled in the tracing instance that match the pids listed. Two probes are added to both the sched_switch and sched_wakeup tracepoints, to be called before other probes are called and after the other probes are called. The first is used to set the necessary flags to let the probes know to test if they should be traced or not. The sched_switch pre probe will set the "ignore_pid" flag if neither the previous nor the next task has a matching pid. The sched_switch probe will set the "ignore_pid" flag if the next task does not match the matching pid. The pre probe allows for probes tracing sched_switch to be traced if necessary. The sched_wakeup pre probe will set the "ignore_pid" flag if neither the current task nor the wakee task has a matching pid. The sched_wakeup post probe will set the "ignore_pid" flag if the current task does not have a matching pid. Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Steven Rostedt (Red Hat)
Create a tracing file called set_event_pid, which currently has no function, but will be used to filter all events for the tracing instance to the pids that are added to the file. The reason no functionality is added with this commit is that this commit focuses on the creation and removal of the pids in a safe manner. Tests can then be made against this change to make sure things are correct before hooking features to the list of pids. Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
- 22 Oct 2015, 1 commit
-
-
Committed by Alexei Starovoitov
This helper is used to send raw data from an eBPF program into a special PERF_TYPE_SOFTWARE/PERF_COUNT_SW_BPF_OUTPUT perf_event. User space needs to perf_event_open() it (either for one or all cpus) and store the FD into a perf_event_array (similar to the bpf_perf_event_read() helper) before the eBPF program can send data into it. Today the programs triggered by kprobe collect the data and either store it into the maps or print it via bpf_trace_printk(), where the latter is a debug facility and not suitable for streaming the data. This new helper replaces such bpf_trace_printk() usage and allows programs to have a dedicated channel into user space for post-processing of the raw data collected. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
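A hedged sketch of the user-space side described above; the perf attribute fields follow the commit description, while bpf_update_elem() is assumed to be the thin map-update wrapper used by the sample code of that era (a modern program would use a libbpf call instead):

    #include <linux/perf_event.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* open a software/BPF_OUTPUT event on one cpu and store its fd in the
     * perf_event_array map so eBPF can bpf_perf_event_output() into it */
    static int open_bpf_output_event(int map_fd, int cpu)
    {
            struct perf_event_attr attr = {
                    .type        = PERF_TYPE_SOFTWARE,
                    .size        = sizeof(struct perf_event_attr),
                    .config      = PERF_COUNT_SW_BPF_OUTPUT,
                    .sample_type = PERF_SAMPLE_RAW,
            };
            int pmu_fd = syscall(__NR_perf_event_open, &attr,
                                 -1 /* pid */, cpu, -1 /* group */, 0);

            if (pmu_fd < 0)
                    return -1;
            /* assumed helper: update map[cpu] = pmu_fd */
            return bpf_update_elem(map_fd, &cpu, &pmu_fd, 0 /* BPF_ANY */);
    }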
-
- 21 Oct 2015, 5 commits
-
-
Committed by Dmitry Safonov
Both start_branch_trace() and stop_branch_trace() are used in only one location, and are both static. As they are small functions there is no need to keep them separated out. Link: http://lkml.kernel.org/r/1445000689-32596-1-git-send-email-0x7f454c46@gmail.com Signed-off-by: Dmitry Safonov <0x7f454c46@gmail.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Tal Shorer
Add a new option to the trace Kconfig, CONFIG_TRACING_EVENTS_GPIO, that is used for enabling/disabling compilation of the gpio function trace events. Link: http://lkml.kernel.org/r/1438432079-11704-4-git-send-email-tal.shorer@gmail.com Acked-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Tal Shorer <tal.shorer@gmail.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Steven Rostedt (Red Hat)
The code in the stack tracer should not be executed within an NMI, as it grabs spinlocks, and stack tracing an NMI opens the possibility of causing a deadlock. Although this is safe on x86_64, because it does not perform stack traces when the task struct stack is not in use (interrupts and NMIs), it may be an issue for NMIs on i386 and other archs that use the same stack as the NMI. Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
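A hedged sketch of the guard this describes, placed at the start of the stack tracer's per-call check (exact placement is an assumption):

    /* stack tracing takes spinlocks; doing that from NMI context could
     * deadlock, so bail out early */
    if (unlikely(in_nmi()))
            return;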
-
Committed by Dmitry Safonov
Extend the module command for function filter selection with globbing. It uses the same globbing as the function filter.
sh# echo '*alloc*:mod:*' > set_ftrace_filter
Will trace any function with the letters 'alloc' in the name in any module, but not in the kernel.
sh# echo '!*alloc*:mod:ipv6' >> set_ftrace_filter
Will prevent tracing functions with 'alloc' in the name from module ipv6 (do not forget to append to the set_ftrace_filter file).
sh# echo '*alloc*:mod:!ipv6' > set_ftrace_filter
Will trace functions with 'alloc' in the name from the kernel and any module except ipv6.
sh# echo '*alloc*:mod:!*' > set_ftrace_filter
Will trace any function with the letters 'alloc' in the name only from the kernel, but not from any module.
sh# echo '*:mod:!*' > set_ftrace_filter
or
sh# echo ':mod:!' > set_ftrace_filter
Will trace every function in the kernel, but will not trace functions from any module.
sh# echo '*:mod:*' > set_ftrace_filter
or
sh# echo ':mod:' > set_ftrace_filter
As the opposite, will trace all functions from all modules, but not from the kernel.
sh# echo '*:mod:*snd*' > set_ftrace_filter
Will trace your sound drivers only (if any).
Link: http://lkml.kernel.org/r/1443545176-3215-4-git-send-email-0x7f454c46@gmail.com Signed-off-by: Dmitry Safonov <0x7f454c46@gmail.com> [ Made format changes ] Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Dmitry Safonov
The ftrace_match() parameters are closely related, and grouping them reduces the number of local variables & parameters. This is also preparation for module globbing, as that would introduce more related variables & parameters. Link: http://lkml.kernel.org/r/1443545176-3215-3-git-send-email-0x7f454c46@gmail.com Signed-off-by: Dmitry Safonov <0x7f454c46@gmail.com> [ Made some formatting changes ] Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
- 20 Oct 2015, 1 commit
-
-
Committed by Steven Rostedt (Red Hat)
The stack tracer was triggering the WARN_ON() in module.c:
static void module_assert_mutex_or_preempt(void)
{
#ifdef CONFIG_LOCKDEP
	if (unlikely(!debug_locks))
		return;
	WARN_ON(!rcu_read_lock_sched_held() &&
		!lockdep_is_held(&module_mutex));
#endif
}
The reason is that the stack tracer traces all function calls, and some of those calls happen while exiting or entering user space and idle. Some of these functions are called after RCU had already stopped watching, as RCU does not watch user space or idle CPUs. If a max stack is hit, then save_stack_trace() is called, which will check module addresses and call module_assert_mutex_or_preempt(), and then trigger the warning. The sad part is that the warning itself will also do a stack trace and trigger the same warning. That probably should be fixed. The warning was added by 0be964be "module: Sanitize RCU usage and locking", but this bug has probably been around longer. It's unlikely to cause much harm, but the new warning causes the system to lock up. Cc: stable@vger.kernel.org # 4.2+ Cc: Peter Zijlstra <peterz@infradead.org> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
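The message above only describes the problem; as a hedged illustration, one way to avoid calling into the module code from such contexts is to check rcu_is_watching() first (the actual patch may instead make RCU watch around the trace):

    /* save_stack_trace() can end up checking module addresses, which
     * needs RCU; skip the stack check on idle / user-entry paths where
     * RCU is not watching this CPU */
    if (unlikely(!rcu_is_watching()))
            return;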
-