- 14 Oct, 2008 40 commits
-
Committed by Frédéric Weisbecker
The tracing engine now has to be initialized from an early_initcall to set up the boot tracer. Only the debugfs settings will be initialized, at fs_initcall time.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
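A minimal sketch (not the actual patch) of the two initcall levels this split relies on; the function names here are invented for illustration:

  #include <linux/init.h>

  static int __init tracer_core_init(void)
  {
          /* set up the tracing core early enough for the boot tracer */
          return 0;
  }
  early_initcall(tracer_core_init);

  static int __init tracer_debugfs_init(void)
  {
          /* debugfs entries can wait until filesystem infrastructure exists */
          return 0;
  }
  fs_initcall(tracer_debugfs_init);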
-
Committed by Frédéric Weisbecker
Add the boot/initcall tracer. Its primary purpose is to trace the initcalls. It is intended to be used with scripts/bootgraph.pl after some small improvements. Note that it is not active after its own init. To avoid tracing (and so crashing) before the whole tracing engine is initialized, you have to explicitly call start_boot_trace() after do_pre_smp_initcalls() to enable it.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Arjan van de Ven
When optimizing kernel boot time, it's very valuable to visualize what is going on at which time. In addition, with the fastboot asynchronous initcall level, it's very valuable to see which initcall gets run where and when. This patch adds a script to turn a dmesg into an SVG graph (which can be shown with tools such as Inkscape, Gimp or Firefox) and a small change to the initcall code to print the PID of the thread calling the initcall, so that the script can work out the parallelism.
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
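A hedged sketch of the kind of instrumentation described above (the exact printk format in the real patch may differ): printing the calling thread's PID around each initcall lets the script reconstruct which calls ran in parallel:

  #include <linux/init.h>
  #include <linux/kernel.h>
  #include <linux/sched.h>

  static int __init do_one_initcall_sketch(initcall_t fn)
  {
          int ret;

          /* the PID identifies the booting thread for bootgraph.pl */
          printk(KERN_DEBUG "calling initcall %p @ pid %i\n",
                 fn, task_pid_nr(current));
          ret = fn();
          printk(KERN_DEBUG "initcall %p returned %d\n", fn, ret);
          return ret;
  }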
-
Committed by Lai Jiangshan
A bit-field is neither thread-safe nor smp-safe. struct marker_entry.rcu_pending is not protected by any lock in the rcu callback free_old_closure(), so we must turn it into a safe type.

Detail: suppose rcu_pending and ptype are stored in struct marker_entry.tmp1.

  free_old_closure() side:         | change-ptype side:
                                   | load struct marker_entry.tmp1
  ---------------------------------|--------------------------------
                                   | change ptype bit in tmp1
  load struct marker_entry.tmp1    |
  change rcu_pending bit in tmp1   |
  store tmp1                       |
  ---------------------------------|--------------------------------
                                   | store tmp1

This result is as if free_old_closure() never changed the rcu_pending bit: a bug. It causes a redundant rcu_barrier_sched() call, which is not too harmful.

The corresponding case:

  free_old_closure() side:         | change-ptype side:
  load struct marker_entry.tmp1    |
  ---------------------------------|--------------------------------
                                   | load struct marker_entry.tmp1
  change rcu_pending bit in tmp1   |
                                   | change ptype bit in tmp1
                                   | store tmp1
  ---------------------------------|--------------------------------
  store tmp1                       |

This result is as if the change-ptype side never changed the ptype bit: a bug. It causes marker_probe_cb() to access invalid memory. Oops!

See also: http://en.wikipedia.org/wiki/Bit_field
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Acked-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
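A reduced C sketch of the hazard and the fix (the member names come from the message, the exact layout is assumed): two bit-fields sharing a storage unit are each updated with a read-modify-write of the whole unit, so concurrent updates can lose each other:

  /* unsafe: rcu_pending and ptype live in the same word ("tmp1") */
  struct marker_entry_unsafe {
          unsigned int ptype:1;        /* flipped under markers_mutex  */
          unsigned int rcu_pending:1;  /* flipped from an RCU callback */
  };

  /* safe: each flag gets its own addressable byte, so a store to one
   * can no longer overwrite a concurrent store to the other */
  struct marker_entry_safe {
          unsigned char ptype;
          unsigned char rcu_pending;
  };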
-
Committed by Lai Jiangshan
When the second, third, ... probe is registered, its format is not checked; this patch fixes that.
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Acked-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Mathieu Desnoyers
Turn marker_synchronize_unregister() into a static inline. There is no reason to keep it as a macro rather than a static inline.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
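A sketch of the before/after shape of the change (the body is synchronize_sched(), per the surrounding commits):

  /* before: a macro, with no type checking */
  #define marker_synchronize_unregister() synchronize_sched()

  /* after: a static inline, type-checked and visible to debuggers */
  static inline void marker_synchronize_unregister(void)
  {
          synchronize_sched();
  }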
-
Committed by Mathieu Desnoyers
Lai Jiangshan discovered a reentrancy issue with markers and fixed it by adding synchronize_sched() calls at each registration/unregistration. That works, but it removes the ability to do batch registration/unregistration, and registering ~100 markers can take about 30 seconds on a loaded machine (synchronize_sched() is much slower under such workloads). This patch implements a version of the fix that won't slow down batch marker registration/unregistration. It also goes back to the original non-synchronized reg/unreg.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Mathieu Desnoyers
We need a marker_synchronize_unregister() before the end of exit() to make sure every probe caller has exited the non-preemptible section and is therefore no longer executing the probe code.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Mathieu Desnoyers
Document the need for a marker_synchronize_unregister() before the end of exit(), to make sure every probe caller has exited the non-preemptible section and is therefore no longer executing the probe code.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Mathieu Desnoyers
We need a marker_synchronize_unregister() before the end of exit() to make sure every probe caller has exited the non-preemptible section and is therefore no longer executing the probe code.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Mathieu Desnoyers
Use the new rcu_read_lock_sched()/rcu_read_unlock_sched() in the marker code around the call site, instead of preempt_disable()/preempt_enable(). It makes the code easier to review.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
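A before/after sketch of the substitution (the probe-call line is a placeholder): the two forms are equivalent for the sched flavour of RCU, but the rcu_*_sched() names state the synchronization intent explicitly:

  /* before */
  preempt_disable();
  call_marker_probes(entry);   /* placeholder for the real call site */
  preempt_enable();

  /* after */
  rcu_read_lock_sched();
  call_marker_probes(entry);
  rcu_read_unlock_sched();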
-
Committed by Mathieu Desnoyers
Create marker_synchronize_unregister(), which must be called before the end of exit() to make sure every probe caller has exited the non-preemptible section and is therefore no longer executing the probe code.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
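A sketch of the intended usage pattern in a module's exit path (the marker name, probe and data here are hypothetical):

  static void __exit my_module_exit(void)
  {
          marker_probe_unregister("my_event", my_probe, my_data);
          /* wait until no cpu can still be executing my_probe ... */
          marker_synchronize_unregister();
          /* ... only then is it safe to free what the probe used */
          kfree(my_data);
  }
  module_exit(my_module_exit);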
-
Committed by Mathieu Desnoyers
The tracepoints had the same reentrancy problem the markers had. Apply a similar fix, using an rcu_barrier after each tracepoint mutex lock.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Mathieu Desnoyers
Make tracepoints use RCU sched. (cleanup)
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Lai Jiangshan
Unregister bug: code using markers typically calls marker_probe_unregister() and then destroys the data that the marker_probe_func needs (or unloads its module). This is a bug when the corresponding marker_probe_func is still running on another cpu, because it is then using data that is being, or has been, destroyed. We should call synchronize_sched() after marker_update_probes().

Reentrancy bug: marker_probe_register(), marker_probe_unregister() and marker_probe_unregister_private_data() are not reentrant-safe. These three functions release markers_mutex and then acquire it again before doing "entry->oldptr = old; ...", but entry->oldptr may still be in use, since the three functions can be reentered while markers_mutex is released. We use synchronize_sched() instead of call_rcu_sched() to fix this bug. We could instead do:

  if (entry->rcu_pending)
          rcu_barrier_sched();

after re-acquiring markers_mutex, but synchronize_sched() is better and simpler, and these three functions are not on a critical path anyway.
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
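A simplified sketch of the fixed unregister path (bodies elided; names follow the message): waiting for a sched-RCU grace period after the probe update guarantees that no cpu is still running the old probe:

  void marker_probe_unregister_sketch(const char *name)
  {
          mutex_lock(&markers_mutex);
          /* ... unlink the probe from the marker entry ... */
          marker_update_probes();     /* repoint all call sites */
          mutex_unlock(&markers_mutex);
          synchronize_sched();        /* drain in-flight probe calls */
  }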
-
Committed by Frédéric Weisbecker
With latest -tip I get this bug:

  [ 49.439988] in_atomic():0, irqs_disabled():1
  [ 49.440118] INFO: lockdep is turned off.
  [ 49.440118] Pid: 2814, comm: modprobe Tainted: G W 2.6.27-rc7 #4
  [ 49.440118] [<c01215e1>] __might_sleep+0xe1/0x120
  [ 49.440118] [<c01148ea>] ftrace_modify_code+0x2a/0xd0
  [ 49.440118] [<c01148a2>] ? ftrace_test_p6nop+0x0/0xa
  [ 49.440118] [<c016e80e>] __ftrace_update_code+0xfe/0x2f0
  [ 49.440118] [<c01148a2>] ? ftrace_test_p6nop+0x0/0xa
  [ 49.440118] [<c016f190>] ftrace_convert_nops+0x50/0x80
  [ 49.440118] [<c016f1d6>] ftrace_init_module+0x16/0x20
  [ 49.440118] [<c015498b>] load_module+0x185b/0x1d30
  [ 49.440118] [<c01767a0>] ? find_get_page+0x0/0xf0
  [ 49.440118] [<c02463c0>] ? sprintf+0x0/0x30
  [ 49.440118] [<c034e012>] ? mutex_lock_interruptible_nested+0x1f2/0x350
  [ 49.440118] [<c0154eb3>] sys_init_module+0x53/0x1b0
  [ 49.440118] [<c0352340>] ? do_page_fault+0x0/0x740
  [ 49.440118] [<c0104012>] syscall_call+0x7/0xb
  [ 49.440118] =======================

It happens because ftrace_modify_code() calls copy_to_user() and copy_from_user(). These calls were inserted on the assumption that there couldn't be any race condition, but copy_{to,from}_user() might sleep, and __ftrace_update_code() is called with interrupts disabled via local_irq_save(). The calls were introduced by commit d5e92e8978fd2574e415dc2792c5eb592978243d: "ftrace: x86 use copy from user function".
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Harvey Harrison
We could just as easily change the three casts to cast to the correct type; this patch changes the type of ftrace_nop instead.

Suppresses the following sparse warnings:

  arch/x86/kernel/ftrace.c:157:14: warning: incorrect type in assignment (different signedness)
  arch/x86/kernel/ftrace.c:157:14:    expected long *static [toplevel] ftrace_nop
  arch/x86/kernel/ftrace.c:157:14:    got unsigned long *<noident>
  arch/x86/kernel/ftrace.c:161:14: warning: incorrect type in assignment (different signedness)
  arch/x86/kernel/ftrace.c:161:14:    expected long *static [toplevel] ftrace_nop
  arch/x86/kernel/ftrace.c:161:14:    got unsigned long *<noident>
  arch/x86/kernel/ftrace.c:165:14: warning: incorrect type in assignment (different signedness)
  arch/x86/kernel/ftrace.c:165:14:    expected long *static [toplevel] ftrace_nop
  arch/x86/kernel/ftrace.c:165:14:    got unsigned long *<noident>

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
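A sketch of the fix implied by the warnings (the declaration is reconstructed from them, not copied from the tree): the declared pointer type is changed to match what is assigned to it, instead of casting at each of the three assignment sites:

  /* before: sparse complains at each assignment of an unsigned long * */
  static long *ftrace_nop;

  /* after: the type matches the values assigned to it */
  static unsigned long *ftrace_nop;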
-
Committed by Steven Rostedt
With the recent updates to ftrace, there should not be any failures when modifying the code. If there are, we need to warn about them. This patch is a cleaned-up version of the code I used to discover that weak symbols were causing failures.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Frédéric Weisbecker
Replace the "none" tracer with the recently created "nop" tracer. The two are pretty similar, except that nop accepts TRACE_PRINT and TRACE_SPECIAL entries. As a consequence, changing the size of the ring buffer now requires that tracing has already been disabled.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Steven Noonan <steven@uplinklabs.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Frédéric Weisbecker
Now that the nop tracer is used as the default tracer, replacing the "none" tracer, the tracing engine depends on it.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Steven Noonan <steven@uplinklabs.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Frédéric Weisbecker
If the nop tracer is selected, some old entries from the previous tracer could still be enqueued, so tracing has to be reset.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Steven Noonan <steven@uplinklabs.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Steven Noonan
The functions are already 'extern' anyway, so there is no problem with linkage. Removing these ifdefs also helps find potential compiler errors. Suggested by Andrew Morton.
Signed-off-by: Steven Noonan <steven@uplinklabs.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Steven Noonan
When CONFIG_DYNAMIC_FTRACE isn't used, neither is mcount_addr. This patch eliminates the resulting warning.
Signed-off-by: Steven Noonan <steven@uplinklabs.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Steven Noonan
A no-op tracer that can serve two purposes:

 1. A template for development of a new tracer.
 2. A convenient way to see ftrace_printk() calls without an irrelevant trace making the output messy.

[ mingo@elte.hu: resolved conflicts ]
Signed-off-by: Steven Noonan <steven@uplinklabs.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
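A minimal sketch of what such a template tracer looks like (the struct fields are assumed from context and simplified; "trace.h" stands for the tracer core's private header):

  #include "trace.h"   /* private tracer definitions, assumed */

  static void nop_trace_init(struct trace_array *tr)  { }
  static void nop_trace_reset(struct trace_array *tr) { }

  static struct tracer nop_trace __read_mostly = {
          .name  = "nop",
          .init  = nop_trace_init,
          .reset = nop_trace_reset,
  };

  static int __init init_nop_trace(void)
  {
          return register_tracer(&nop_trace);
  }
  device_initcall(init_nop_trace);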
-
Committed by Pekka Paalanen
Allow a user to inject a marker (a TRACE_PRINT entry) into the trace ring buffer. The related file operations are derived from code by Frédéric Weisbecker <fweisbec@gmail.com>.
Signed-off-by: Pekka Paalanen <pq@iki.fi>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
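A hypothetical user-space use of the injection interface (the debugfs path is assumed to be the usual tracing mount point):

  #include <fcntl.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
          const char msg[] = "hello from user space\n";
          int fd = open("/sys/kernel/debug/tracing/trace_marker", O_WRONLY);

          if (fd < 0)
                  return 1;
          write(fd, msg, strlen(msg));  /* becomes a TRACE_PRINT entry */
          close(fd);
          return 0;
  }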
-
Committed by Pekka Paalanen
Signed-off-by: Pekka Paalanen <pq@iki.fi>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Pekka Paalanen
Also make trace_seq_print_cont() non-static, and add a newline if the seq buffer cannot hold all the data.
Signed-off-by: Pekka Paalanen <pq@iki.fi>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Pekka Paalanen
Offer mmiotrace users a function to inject markers from inside the kernel. This depends on the trace_vprintk() patch.
Signed-off-by: Pekka Paalanen <pq@iki.fi>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Pekka Paalanen
Add trace_vprintk() for easier implementation of tracer-specific *_printk functions. Add a check for no_tracer, and implement __ftrace_printk() as a wrapper.
Signed-off-by: Pekka Paalanen <pq@iki.fi>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
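A sketch of the wrapper pattern this describes (signatures assumed from the message, not copied from the tree): the printf-style entry point just forwards its varargs to the va_list core:

  #include <stdarg.h>

  int trace_vprintk(unsigned long ip, const char *fmt, va_list args);

  int __ftrace_printk(unsigned long ip, const char *fmt, ...)
  {
          va_list ap;
          int ret;

          va_start(ap, fmt);
          ret = trace_vprintk(ip, fmt, ap);
          va_end(ap);
          return ret;
  }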
-
Committed by Pekka Paalanen
Move the mmiotrace-specific functions from trace.c to trace_mmiotrace.c. The functions trace_wake_up(), tracing_get_trace_entry() and tracing_generic_entry_update() are therefore made available outside trace.c.
Signed-off-by: Pekka Paalanen <pq@iki.fi>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Pekka Paalanen
Signed-off-by: Pekka Paalanen <pq@iki.fi>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Steven Rostedt
This must be brown paper bag week for Steven Rostedt! While working on ftrace for PPC, I discovered that the hash locking done when CONFIG_FTRACE_MCOUNT_RECORD is not set is totally incorrect. Through a cut-and-paste error, the hash lock macros locked for both hash_lock _and_ hash_unlock! This bug did not affect x86, since it was introduced when CONFIG_FTRACE_MCOUNT_RECORD was added to x86.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
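A reconstruction of the bug class (the macro bodies are guessed from the description, not copied from the tree): the unlock macro was pasted from the lock macro, so the lock was taken twice and never released:

  /* buggy: cut-and-paste left both macros taking the lock */
  #define ftrace_hash_lock(flags)   spin_lock_irqsave(&ftrace_hash_spinlock, flags)
  #define ftrace_hash_unlock(flags) spin_lock_irqsave(&ftrace_hash_spinlock, flags)

  /* fixed: the unlock macro actually unlocks */
  #define ftrace_hash_unlock(flags) \
          spin_unlock_irqrestore(&ftrace_hash_spinlock, flags)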
-
Committed by Steven Rostedt
ftrace_release is necessary for all uses of dynamic ftrace, not just the archs that have CONFIG_FTRACE_MCOUNT_RECORD defined.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Ingo Molnar
Make most of the tracers depend on DEBUG_KERNEL; that is their intended purpose. (Most distributions have DEBUG_KERNEL enabled anyway, so this is not a practical limitation, but it simplifies the tracing menu in the normal case.)
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Peter Zijlstra
While profiling the SMP behaviour of the scheduler, it was necessary to know to which cpu a task got woken.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Peter Zijlstra
Currently ftrace_printk only works with the ftrace tracer. Switch it to an iter_ctrl setting so we can make use of it with other tracers too.
[rostedt@redhat.com: tweak to the disable condition]
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Steven Rostedt
An item in the trace buffer that is bigger than one entry may be split up using TRACE_CONT entries, which makes it a virtual single entry. The current code increments the iterator index even while traversing TRACE_CONT entries, making the iterator look further along than it actually is. This patch stops the iterator index from being incremented while skipping over TRACE_CONT entries.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Steven Rostedt
Peter Zijlstra provided me with a nice brown paper bag while letting me know that I was doing a logical AND and not a bitwise one, making a condition true more often than it should be. Luckily, a spurious true is handled by the calling function and no harm is done; but this needs to be fixed regardless.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
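A reduced illustration of this bug class (the actual condition in the patch differs): with a nonzero mask, the logical form is true whenever flags is nonzero at all, not only when the intended bit is set:

  #define MY_BIT 0x02                /* hypothetical flag bit */

  if (flags && MY_BIT)               /* buggy: logical AND, true for any nonzero flags */
          do_something();

  if (flags & MY_BIT)                /* intended: bitwise AND, true only if the bit is set */
          do_something();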
-
Committed by Michael Ellerman
Currently some of the ftrace output becomes misaligned if you have more than 9 cpus, and more of it if you have more than 99. Adjust the headers and format strings so that up to 999 cpus display without spacing problems.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
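A sketch of the kind of format-string change involved (the exact strings in the patch may differ): widening the cpu field keeps columns aligned up to three-digit cpu numbers:

  /* before: columns drift once cpu numbers reach two or three digits */
  seq_printf(m, "%2d) ", cpu);

  /* after: fixed three-character field, aligned up to cpu 999 */
  seq_printf(m, "%3d) ", cpu);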
-
Committed by Ingo Molnar
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-