- 01 November 2014, 1 commit
-
-
Committed by Steven Rostedt (Red Hat)
The current method of handling multiple function callbacks is to register a list function callback that calls all the other callbacks based on their hash tables, comparing each hash to the function the callback was called on. But this is very inefficient. For example, if you are tracing all functions in the kernel and then add a kprobe to a function such that the kprobe uses ftrace, the mcount trampoline will switch from calling the function trace callback to calling the list callback that will iterate over all registered ftrace_ops (in this case, the function tracer and the kprobes callback). That means for every function being traced it checks the hash of the ftrace_ops for function tracing and kprobes, even though the kprobe is only set on a single function. The kprobes ftrace_ops is checked for every function being traced!

Instead of calling the list function for functions that are only being traced by a single callback, we can call a dynamically allocated trampoline that calls the callback directly. The function graph tracer already uses a direct call trampoline when it is being traced by itself, but it is not dynamically allocated: its trampoline is static in the kernel core. The infrastructure that called the function graph trampoline can also be used to call a dynamically allocated one.

For now, only ftrace_ops that are not dynamically allocated can have a trampoline. That is, users such as the function tracer or the stack tracer. kprobes and perf allocate their ftrace_ops, and until there's a safe way to free the trampoline, it cannot be used. Dynamically allocated ftrace_ops may, however, use the trampoline if the kernel is not compiled with CONFIG_PREEMPT. But that will come later.

Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Tested-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
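As a rough illustration of the dispatch choice described above (a hedged sketch, not the kernel's actual code: ftrace_ops_list_func, FTRACE_OPS_FL_DYNAMIC and the trampoline field are real names, but the helper itself is made up):

    /* Hypothetical helper: pick the target a patched mcount site jumps to. */
    static unsigned long ftrace_ops_call_target(struct ftrace_ops *ops)
    {
            /*
             * Dynamically allocated ops (kprobes, perf) cannot get a
             * private trampoline yet: there is no safe way to free one
             * while another CPU might still be executing inside it.
             */
            if (ops->flags & FTRACE_OPS_FL_DYNAMIC)
                    return (unsigned long)ftrace_ops_list_func;

            /* Static ops with a trampoline call their callback directly,
             * with no hash-table checks on the hot path. */
            if (ops->trampoline)
                    return ops->trampoline;

            return (unsigned long)ftrace_ops_list_func;
    }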
-
- 17 July 2014, 1 commit
-
-
Committed by Steven Rostedt (Red Hat)
ftrace_stop() is going away, as it disables parts of function tracing that affect users who should not be affected. But ftrace_graph_stop() is built on ftrace_stop(). Here's another example of killing all of function tracing because something went wrong with function graph tracing.

Instead of disabling all users of function tracing on a function graph error, disable only function graph tracing. To do this, the arch code must call ftrace_graph_is_dead() before it implements function graph.

Link: http://lkml.kernel.org/r/53C54D18.3020602@zytor.com
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
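A minimal sketch of the arch-side guard (illustrative only; the real check sits in each arch's function graph entry code, and x86's prepare_ftrace_return() takes an extra frame-pointer argument omitted here):

    void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr)
    {
            /* Graph tracing hit an error and shut itself down:
             * leave the return address untouched. */
            if (unlikely(ftrace_graph_is_dead()))
                    return;

            /* ... hook *parent onto the ret_stack as usual ... */
    }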
-
- 04 June 2014, 1 commit
-
-
Committed by Petr Mladek
I just went over this when looking at some Xen-related ftrace initialization problems. They were related to Xen code that is not upstream, but this clean-up would make sense here. I think this was already the intention when text_ip_addr() was introduced in commit 87fbb2ac (ftrace/x86: Use breakpoints for converting function graph caller). Anyway, better to do it now, before it shoots someone in the leg ;-)

Link: http://lkml.kernel.org/p/1401812601-2359-1-git-send-email-pmladek@suse.cz
Signed-off-by: Petr Mladek <pmladek@suse.cz>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
- 14 May 2014, 3 commits
-
-
Committed by Steven Rostedt (Red Hat)
As the decision about what needs to be done (converting a call to ftrace_caller into ftrace_caller_regs, or converting from ftrace_caller_regs back to ftrace_caller) can easily be determined from the rec->flags FTRACE_FL_REGS and FTRACE_FL_REGS_EN, there's no need to have ftrace_check_record() return either a UPDATE_MODIFY_CALL_REGS or a UPDATE_MODIFY_CALL. Just the latter is enough. The added flag causes more complexity than is required. Remove it.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Steven Rostedt (Red Hat)
Move and rename get_ftrace_addr() and get_ftrace_addr_old() to ftrace_get_addr_new() and ftrace_get_addr_curr() respectively. This moves the two helper functions out of the arch-specific code into the generic code, and renames them to have better generic names. It will allow other archs to use them, and makes it a bit easier to work on getting separate trampolines for different functions.

ftrace_get_addr_new() returns the trampoline address that the mcount call address will be converted to. ftrace_get_addr_curr() returns the trampoline address that the mcount call address currently jumps to.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
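A simplified sketch of the two helpers (close to, but not verbatim, the kernel code; FTRACE_ADDR and FTRACE_REGS_ADDR resolve to ftrace_caller and ftrace_regs_caller):

    unsigned long ftrace_get_addr_new(struct dyn_ftrace *rec)
    {
            if (rec->flags & FTRACE_FL_REGS)        /* what it should become */
                    return (unsigned long)FTRACE_REGS_ADDR;
            return (unsigned long)FTRACE_ADDR;
    }

    unsigned long ftrace_get_addr_curr(struct dyn_ftrace *rec)
    {
            if (rec->flags & FTRACE_FL_REGS_EN)     /* what it is right now */
                    return (unsigned long)FTRACE_REGS_ADDR;
            return (unsigned long)FTRACE_ADDR;
    }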
-
Committed by Steven Rostedt (Red Hat)
The add_breakpoint() code in the ftrace update gets the address of what the call will become; only if the mcount address is changing from the regs to the non-regs ftrace_caller (or vice versa) does it use what the record currently is. This is rather silly, as the code should always use what is currently there, regardless of whether it's changing the regs function or just converting to a nop.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
- 22 April 2014, 1 commit
-
-
Committed by Petr Mladek
The colon at the end of the printk message suggests that it should get printed before the details printed by ftrace_bug(). While touching the line, let's use the preferred pr_warn() macro, as suggested by checkpatch.pl.

Link: http://lkml.kernel.org/r/1392650573-3390-5-git-send-email-pmladek@suse.cz
Signed-off-by: Petr Mladek <pmladek@suse.cz>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
- 07 March 2014, 4 commits
-
-
Committed by Petr Mladek
Ftrace modifies function calls using Int3 breakpoints on x86. The breakpoints are handled only while the patching is in progress. If something goes wrong, there is recovery code that removes the breakpoints. If this fails, the system might get silently rebooted when a remaining breakpoint is not handled or an invalid instruction is processed.

We should BUG() when the breakpoint could not be removed. Otherwise, the system silently crashes once the patching function finishes and the Int3 handler is disabled.

Note that we need to modify remove_breakpoint() to return a non-zero value only when there is an error. The return value was ignored before, so this does not cause any trouble.

Link: http://lkml.kernel.org/r/1393258342-29978-4-git-send-email-pmladek@suse.cz
Signed-off-by: Petr Mladek <pmladek@suse.cz>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
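A hedged sketch of the hardened recovery loop (for_ftrace_rec_iter(), ftrace_rec_iter_record() and remove_breakpoint() are real names from this file; declarations and surrounding code are elided):

    for_ftrace_rec_iter(iter) {
            rec = ftrace_rec_iter_record(iter);
            /*
             * remove_breakpoint() now returns non-zero only on a real
             * error; a stray int3 left behind is unrecoverable, so
             * crash loudly here instead of rebooting silently later.
             */
            if (remove_breakpoint(rec))
                    BUG();
    }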
-
Committed by Jiri Slaby
As the data parameter is not really used by any ftrace_dyn_arch_init() implementation, remove it from ftrace_dyn_arch_init(). This also removes the now-unused addr local variable from ftrace_init(). Note the documentation was imprecise, as it did not say that (*data) must be set to 0.

Link: http://lkml.kernel.org/r/1393268401-24379-4-git-send-email-jslaby@suse.cz
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: linux-arch@vger.kernel.org
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
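An illustrative before/after of the arch hook (the x86 flavor shown; other archs looked the same modulo the body):

    /* before */
    int ftrace_dyn_arch_init(void *data)
    {
            /* "data" was only ever used to fake a second return value */
            *(unsigned long *)data = 0;
            return 0;
    }

    /* after */
    int ftrace_dyn_arch_init(void)
    {
            return 0;
    }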
-
Committed by Jiri Slaby
No architecture uses the "data" parameter in ftrace_dyn_arch_init() in any way; each just sets the value to 0. And this is used as a return value in the caller, ftrace_init(), which just checks the retval against zero. Note there is also "return 0" in every ftrace_dyn_arch_init(). So it is enough to check the retval and remove all the indirect sets of data on all archs.

Link: http://lkml.kernel.org/r/1393268401-24379-3-git-send-email-jslaby@suse.cz
Cc: linux-arch@vger.kernel.org
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Steven Rostedt (Red Hat)
Have ftrace_write() return -EPERM on failure, as that's what its callers return; then we can clean up the code a bit. That is, instead of:

    if (ftrace_write(...))
            return -EPERM;
    return 0;

or

    if (ftrace_write(...)) {
            ret = -EPERM;
            goto out;
    }

we can instead have:

    return ftrace_write(...);

or

    ret = ftrace_write(...);
    if (ret)
            goto out;

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
- 04 March 2014, 2 commits
-
-
Committed by Petr Mladek
If a failure occurs while modifying an ftrace function, the code bails out and restores the text to what it originally was. What is missing is the final sync run across the CPUs after the fix-up is done and before the ftrace int3 handler flag is reset. Here's a description of the problem:

    CPU0                            CPU1
    ----                            ----
    remove_breakpoint();
    modifying_ftrace_code = 0;
                                    [still sees breakpoint]
                                    <takes trap>
                                    [sees modifying_ftrace_code as zero]
                                    [no breakpoint handler]
                                    [goto failed case]
                                    [trap exception - kernel breakpoint,
                                     no handler]
                                    BUG()

Link: http://lkml.kernel.org/r/1393258342-29978-2-git-send-email-pmladek@suse.cz
Fixes: 8a4d0a68 "ftrace: Use breakpoint method to update ftrace caller"
Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Petr Mladek <pmladek@suse.cz>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
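A hedged sketch of the corrected failure path (run_sync() is the helper in this file that IPIs every CPU to execute a sync_core(); loop details simplified):

    fail_update:
            /* put the original text back at every patched site */
            for_ftrace_rec_iter(iter)
                    remove_breakpoint(ftrace_rec_iter_record(iter));

            /*
             * The missing piece: make sure no CPU can still hit one of
             * our int3s before the ftrace int3 handler stands down.
             */
            run_sync();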
-
Committed by Steven Rostedt (Red Hat)
If a failure occurs while enabling a trace, the code bails out and restores the text to what it originally was. But the fix-up had some bugs in it: after injecting a failure into the code, the fix-up ran to completion, but shortly afterward the system rebooted.

There were two bugs here. The first was that there was no final sync run across the CPUs after the fix-up was done, and before the ftrace int3 handler flag was reset. That means that other CPUs could still see the breakpoint and trigger on it long after the flag was cleared, and the int3 handler would think it was a spurious interrupt. Worse yet, the int3 handler could hit other breakpoints because the ftrace int3 handler flag would have prevented the int3 handler from going further. Here's a description of the issue:

    CPU0                            CPU1
    ----                            ----
    remove_breakpoint();
    modifying_ftrace_code = 0;
                                    [still sees breakpoint]
                                    <takes trap>
                                    [sees modifying_ftrace_code as zero]
                                    [no breakpoint handler]
                                    [goto failed case]
                                    [trap exception - kernel breakpoint,
                                     no handler]
                                    BUG()

The second bug was that the removal of the breakpoints required the "within()" logic instead of accessing the ip address directly. The kernel text is mapped read-only when CONFIG_DEBUG_RODATA is set, and removing the breakpoint is a modification of the kernel text. ftrace_write() includes the "within()" logic, whereas probe_kernel_write() does not. This prevented the breakpoint from being removed at all.

Link: http://lkml.kernel.org/r/1392650573-3390-1-git-send-email-pmladek@suse.cz
Reported-by: Petr Mladek <pmladek@suse.cz>
Tested-by: Petr Mladek <pmladek@suse.cz>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
- 12 February 2014, 1 commit
-
-
Committed by Steven Rostedt (Red Hat)
When the conversion was made to remove stop machine and use the breakpoint logic instead, the modification of the function graph caller was still done directly, as though it were running under stop machine. As it is no longer converted via stop machine, there is a possibility that the code could be laid across cache lines, and if another CPU is accessing that function graph call while it is being updated, it could cause a General Protection Fault. Convert the update of the function graph caller to use the breakpoint method as well.

Cc: H. Peter Anvin <hpa@zytor.com>
Cc: stable@vger.kernel.org # 3.5+
Fixes: 08d636b6 "ftrace/x86: Have arch x86_64 use breakpoints instead of stop machine"
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
- 06 November 2013, 1 commit
-
-
Committed by Kevin Hao
In commit 8a4d0a68 "ftrace: Use breakpoint method to update ftrace caller", we chose to use the breakpoint method to update the ftrace caller. But we also need to skip over the breakpoint in ftrace_int3_handler() for these call sites; otherwise weird things would happen.

Cc: stable@vger.kernel.org # 3.5+
Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
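Roughly what the handler does after the fix (a simplified sketch; the real code also checks the ftrace caller update addresses themselves):

    int ftrace_int3_handler(struct pt_regs *regs)
    {
            /* regs->ip already points just past the int3 byte */
            if (!ftrace_location(regs->ip - 1))
                    return 0;                       /* not one of ours */

            /* skip the whole 5-byte call site being patched */
            regs->ip += MCOUNT_INSN_SIZE - 1;
            return 1;
    }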
-
- 17 November 2012, 1 commit
-
-
Committed by Alexander Duyck
Instead of using __pa, which is meant to be a general function for converting virtual addresses to physical addresses, we can use __pa_symbol, which is the preferred way of decoding kernel text virtual addresses to physical addresses. In this case we are not directly converting C-visible symbols; however, if we know that the instruction pointer is somewhere between _text and _etext, we know that we are going to be translating an address from the kernel text space.

Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Link: http://lkml.kernel.org/r/20121116215718.8521.24026.stgit@ahduyck-cp1.jf.intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
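The resulting translation is roughly (within(), _text and _etext as used elsewhere in this file):

    /* Kernel text is mapped RO; go through the kernel identity mapping. */
    if (within(ip, (unsigned long)_text, (unsigned long)_etext))
            ip = (unsigned long)__va(__pa_symbol(ip));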
-
- 20 July 2012, 2 commits
-
-
Committed by Steven Rostedt
Add saving of the full regs for function tracing on i386. The saving of regs was influenced by patches sent out by Masami Hiramatsu.

Link: http://lkml.kernel.org/r/20120711195745.379060003@goodmis.org
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Steven Rostedt
Add a way to have different functions calling different trampolines. If an ftrace_ops wants regs saved on the return, then have only the functions with ops registered to it save regs. Functions registered by other ops would not be affected, unless the functions overlap.

If one ftrace_ops registered functions A, B and C, and another ops registered functions to save regs on A and D, then only functions A and D would be saving regs. Functions B and C would work as normal. Although A is registered by both ops (normal and save-regs), this is fine, as saving the regs is needed to satisfy the ops that requested it, and the regs are simply ignored by the other ops' function.

x86_64 implements the full regs saving, and i386 just passes a NULL for regs to satisfy the ftrace_ops signature, since an arch must supply both the regs and ftrace_ops parameters, even if regs is just NULL. It is OK for an arch to pass NULL regs.

All function trace users that require regs passing must add the flag FTRACE_OPS_FL_SAVE_REGS when registering the ftrace_ops. If the arch does not support saving regs, then the ftrace_ops will fail to register. The flag FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED may be set to prevent the ftrace_ops from failing to register. In this case, the handler may either check if regs is not NULL or check ARCH_SUPPORTS_FTRACE_SAVE_REGS. If the arch supports passing regs, it will set this macro and pass regs for ops that request them. All other archs will just pass NULL.

Link: http://lkml.kernel.org/r/20120711195745.107705970@goodmis.org
Cc: Alexander van Heukelum <heukelum@fastmail.fm>
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
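A hedged example of a regs-requesting user (the callback signature matches this series; my_callback and my_ops are made-up names for illustration):

    static void my_callback(unsigned long ip, unsigned long parent_ip,
                            struct ftrace_ops *op, struct pt_regs *regs)
    {
            /* regs may be NULL if the arch only passes NULL (i386 here) */
            if (regs)
                    pr_info("traced %pS, ip=%lx\n", (void *)ip, regs->ip);
    }

    static struct ftrace_ops my_ops = {
            .func  = my_callback,
            /* _IF_SUPPORTED: registration still succeeds without regs */
            .flags = FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED,
    };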
-
- 01 June 2012, 2 commits
-
-
Committed by Steven Rostedt
On boot-up and module load, it is fine to modify the code directly, without the use of breakpoints. This is because boot-up modification is done before SMP is initialized, so the modification is serial, and module load is done before the module executes. But after that, we must use an SMP-safe method to modify running code. Otherwise, if we are running the function tracer and update its function (by starting the stack tracer, or perf tracing), the change of the function called by the ftrace trampoline is done directly. If that code is currently being executed on another CPU, that CPU may take a GPF and crash the kernel.

The breakpoint method is used to change the nops at all the function sites, but the change of the ftrace callback handler itself was still done with a direct modification. If tracing was enabled and the function callback was changed, another CPU could fault if it was currently calling the original callback. This modification must use the breakpoint method too.

Note, the direct method is still used for boot-up and module load.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Steven Rostedt
When the function tracer starts modifying code via breakpoints, it sets a variable (modifying_ftrace_code) to inform the breakpoint handler to call the ftrace int3 code. But there's no synchronization between setting this variable and the handler, so it is possible for the handler to be called on another CPU before it sees the variable set. This will cause a kernel crash, as the int3 handler will not know what to do with the trap. I originally added smp_mb()'s to force the visibility of the variable, but H. Peter Anvin suggested that I just make it atomic.

[ Added comments as suggested by Peter Zijlstra ]

Suggested-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
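A simplified sketch of the atomic variant and its reader (close to, but not verbatim, the actual do_int3() check):

    atomic_t modifying_ftrace_code __read_mostly;

    /* writer, wrapped around the whole patching sequence */
    atomic_inc(&modifying_ftrace_code);
    /* ... add breakpoints, patch, remove breakpoints ... */
    atomic_dec(&modifying_ftrace_code);

    /* int3 handler side */
    if (unlikely(atomic_read(&modifying_ftrace_code)) &&
        ftrace_int3_handler(regs))
            return;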
-
- 17 May 2012, 1 commit
-
-
Committed by Steven Rostedt
To remove duplicate code, have the ftrace arch_ftrace_update_code() use the generic ftrace_modify_all_code(). This requires that the default ftrace_replace_code() become a weak function so that an arch may override it.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
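A sketch of the resulting split (ftrace_modify_all_code() and the weak default are real names; bodies elided):

    /* generic code: overridable default */
    void __weak ftrace_replace_code(int enable)
    {
            /* walk all dyn_ftrace records and patch each site */
    }

    /* x86: the arch hook now just drives the shared logic */
    void arch_ftrace_update_code(int command)
    {
            ftrace_modify_all_code(command);
    }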
-
- 04 May 2012, 1 commit
-
-
Committed by Steven Rostedt
If CONFIG_KPROBES is not set, then linux/kprobes.h will not include asm/kprobes.h, which is needed by x86/ftrace.c for the BREAKPOINT macro. The x86/ftrace.c file should just include asm/kprobes.h directly, as it does not need the rest of kprobes.

Reported-by: Ingo Molnar <mingo@elte.hu>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
- 28 April 2012, 2 commits
-
-
Committed by Steven Rostedt
As ftrace function tracing requires modifying code that could be executed in NMI context, which is not stopped by stop_machine(), ftrace had to implement a complex algorithm with various stages of setup and memory barriers to make it work. With the new breakpoint method, this is no longer required: the changes to the code can be made without any problem in NMI context, and without stop machine altogether. Remove the complex code, as it is no longer needed.

Also, a lot of the notrace annotations could be removed from the NMI code, as it is now safe to trace it. The exception is do_nmi itself, which does some special work to handle running on the debug stack. The breakpoint method can cause NMIs to double-nest the debug stack if it's not set up properly, and that is handled in do_nmi(), so that function must not be traced. (Note the arch sh may want to do the same.)

Cc: Paul Mundt <lethal@linux-sh.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Steven Rostedt
This method changes x86 to add a breakpoint to the mcount locations instead of calling stop machine. Now that iret can be handled by NMIs, we perform the following to update code:

1) Add a breakpoint to all locations that will be modified
2) Sync all cores
3) Update all locations to be either a nop or call (except the breakpoint op)
4) Sync all cores
5) Remove the breakpoint with the new code
6) Sync all cores

[ Added updates that Masami suggested:
  Use unlikely(modifying_ftrace_code) in the int3 trap to keep kprobes efficient.
  Don't use NOTIFY_* in the ftrace handler in int3, as it is not a notifier. ]

Cc: H. Peter Anvin <hpa@zytor.com>
Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
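The six steps as a hedged pseudo-C summary (for_each_rec() is shorthand, not a real macro; the add_*/finish_* names mirror the x86 helpers):

    for_each_rec(rec) add_breakpoint(rec);  /* 1: int3 at every site       */
    run_sync();                             /* 2: sync all cores           */
    for_each_rec(rec) add_update(rec);      /* 3: write the nop/call tail  */
                                            /*    behind the int3 byte     */
    run_sync();                             /* 4: sync all cores           */
    for_each_rec(rec) finish_update(rec);   /* 5: restore the first byte   */
    run_sync();                             /* 6: sync all cores           */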
-
- 26 May 2011, 1 commit
-
-
Committed by Rakib Mullick
Due to commit dc326fca (x86, cpu: Clean up and unify the NOP selection infrastructure), we get the following warnings:

    arch/x86/kernel/ftrace.c: In function ‘ftrace_make_nop’:
    arch/x86/kernel/ftrace.c:308:6: warning: assignment discards qualifiers from pointer target type
    arch/x86/kernel/ftrace.c: In function ‘ftrace_make_call’:
    arch/x86/kernel/ftrace.c:318:6: warning: assignment discards qualifiers from pointer target type

ftrace_nop_replace() now returns const unsigned char *, so change its associated function/variable to a compatible type to keep the compiler calm.

Signed-off-by: Rakib Mullick <rakib.mullick@gmail.com>
Link: http://lkml.kernel.org/r/1305221620.7986.4.camel@localhost.localdomain
[ updated for change of const void *src in probe_kernel_write() ]
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
- 19 April 2011, 1 commit
-
-
Committed by H. Peter Anvin
Clean up and unify the NOP selection infrastructure:

- Make the atomic 5-byte NOP a part of the selection system.
- Pick NOPs once during early boot and then be done with it.

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jason Baron <jbaron@redhat.com>
Link: http://lkml.kernel.org/r/1303166160-10315-3-git-send-email-hpa@linux.intel.com
-
- 10 March 2011, 1 commit
-
-
Committed by Steven Rostedt
Currently the index into the ret_stack is updated and the real return address is saved in the ret_stack, and then we call the trace function. The trace function could decide that it doesn't want to trace this function (e.g. set_graph_function does not match) and will return 0, which means not to trace this call. The normal function graph tracer has this code:

    if (!(trace->depth || ftrace_graph_addr(trace->func)) ||
        ftrace_graph_ignore_irqs())
            return 0;

What this states is: if the trace depth (which is curr_ret_stack) is zero (top of nested functions), then test whether we want to trace this function. If this function is not to be traced, then return 0 and the rest of the function graph tracer logic will not trace this function.

The problem arises when an interrupt comes in after we have updated curr_ret_stack. The next function that gets called will have a trace->depth of 1, which fools this trace code into thinking that we are in a nested function and that we should trace. This causes interrupts to be traced when they should not be.

The solution is to trace the function first and then update the ret_stack.

Reported-by: zhiping zhong <xzhong86@163.com>
Reported-by: wu zhangjin <wuzhangjin@gmail.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
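The fixed ordering, roughly (a sketch simplified from prepare_ftrace_return(); old, parent, self_addr and frame_pointer come from its context):

    trace.func = self_addr;
    trace.depth = current->curr_ret_stack + 1;

    /* Ask the tracer *before* touching curr_ret_stack, so an IRQ
     * arriving here can never observe a bogus nesting depth. */
    if (!ftrace_graph_entry(&trace)) {
            *parent = old;          /* restore original return address */
            return;
    }

    if (ftrace_push_return_trace(old, self_addr, &trace.depth,
                                 frame_pointer) == -EBUSY) {
            *parent = old;
            return;
    }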
-
- 30 December 2010, 1 commit
-
-
Committed by Tejun Heo
Go through the x86 code and replace __get_cpu_var and get_cpu_var instances that refer to a scalar and are not used for address determination.

Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Ingo Molnar <mingo@elte.hu>
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
- 18 November 2010, 1 commit
-
-
Committed by matthieu castet
This patch is a logical extension of the protection provided by CONFIG_DEBUG_RODATA to LKMs. The protection is provided by splitting module_core and module_init into three logical parts each, and setting appropriate page access permissions for each individual section:

1. Code: RO+X
2. RO data: RO+NX
3. RW data: RW+NX

In order to achieve proper protection, layout_sections() has been modified to align each of the three parts mentioned above onto a page boundary. Next, the corresponding page access permissions are set right before a successful exit from load_module(). Further, free_module() and sys_init_module have been modified to set module_core and module_init as RW+NX right before calling module_free().

By default, the original section layout and access flags are preserved. When compiled with CONFIG_DEBUG_SET_MODULE_RONX=y, the patch will page-align each group of sections to ensure that each page contains only one type of content, and will enforce RO/NX for each group of pages.

-v1: Initial proof-of-concept patch.
-v2: The patch has been rewritten to reduce the number of #ifdefs and to make it architecture-agnostic. Code formatting has also been corrected.
-v3: Opportunistic RO/NX protection is now unconditional. Section page-alignment is enabled when CONFIG_DEBUG_RODATA=y.
-v4: Removed most macros and improved coding style.
-v5: Changed page-alignment and RO/NX section size calculation.
-v6: Fixed comments. Restricted RO/NX enforcement to x86 only.
-v7: Introduced CONFIG_DEBUG_SET_MODULE_RONX; added calls to set_all_modules_text_rw() and set_all_modules_text_ro() in ftrace.
-v8: Updated for compatibility with linux 2.6.33-rc5.
-v9: Coding style fixes.
-v10: More coding style fixes.
-v11: Minor adjustments for -tip.
-v12: Minor adjustments for v2.6.35-rc2-tip.
-v13: Minor adjustments for v2.6.37-rc1-tip.

Signed-off-by: Siarhei Liakh <sliakh.lkml@gmail.com>
Signed-off-by: Xuxian Jiang <jiang@cs.ncsu.edu>
Acked-by: Arjan van de Ven <arjan@linux.intel.com>
Reviewed-by: James Morris <jmorris@namei.org>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Andi Kleen <ak@muc.de>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Dave Jones <davej@redhat.com>
Cc: Kees Cook <kees.cook@canonical.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <4CE2F914.9070106@free.fr>
[ minor cleanliness edits, -v14: build failure fix ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
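The ftrace side of this, sketched (set_all_modules_text_rw/ro are the real calls the patch adds; the hook bodies are simplified): with module text mapped RO+X, ftrace must flip permissions around a batch update.

    int ftrace_arch_code_modify_prepare(void)
    {
            set_kernel_text_rw();
            set_all_modules_text_rw();
            return 0;
    }

    int ftrace_arch_code_modify_post_process(void)
    {
            set_all_modules_text_ro();
            set_kernel_text_ro();
            return 0;
    }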
-
- 21 September 2010, 1 commit
-
-
Committed by Jason Baron
Move Steve's code for finding the best 5-byte no-op from ftrace.c to alternative.c. The idea is that other consumers (in this case, jump labels) want to make use of that code.

Signed-off-by: Jason Baron <jbaron@redhat.com>
LKML-Reference: <96259ae74172dcac99c0020c249743c523a92e18.1284733808.git.jbaron@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
- 25 February 2010, 1 commit
-
-
Committed by Steven Rostedt
The code in stop_machine that modifies the kernel text has a bit of logic to handle the case of NMIs. stop_machine does not prevent NMIs from executing, and if an NMI were to trigger on another CPU as the modifying CPU is changing the NMI text, a GPF could result. To prevent the GPF, the NMI calls ftrace_nmi_enter(), which may modify the code first; then any other NMIs will just change the text to the same content, which will do no harm. The code that stop_machine called must wait for NMIs to finish while it changes each location in the kernel. That code may also change the text to what the NMI changed it to. The key is that the text will never change content while another CPU is executing it.

To make the above work, the call to ftrace_nmi_enter() must also do an smp_mb() as well as an atomic_inc(). But for applications like perf that require a high number of NMIs for profiling, this can have a dramatic effect on the system: not only is it doing a full memory barrier on both nmi_enter() and nmi_exit(), it is also modifying a global variable with an atomic operation. This kills performance on large SMP machines.

Since the memory barriers are only needed when ftrace is in the process of modifying the text (which is seldom), this patch adds a "modifying_code" variable that gets set before stop machine is executed and cleared afterwards. The NMIs will check this variable and store it in a per-CPU "save_modifying_code" variable, which they then use to decide whether they need to do the memory barriers and atomic dec on NMI exit.

Acked-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
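A sketch of the resulting NMI fast path (modern per-cpu accessors used for brevity; the patch itself used __get_cpu_var, and the slow path is elided):

    static DEFINE_PER_CPU(int, save_modifying_code);

    void ftrace_nmi_enter(void)
    {
            __this_cpu_write(save_modifying_code, modifying_code);
            if (!__this_cpu_read(save_modifying_code))
                    return; /* common case: no text patching in flight */

            /* slow path, only while patching:
             * atomic_inc(&nmi_running); smp_mb(); maybe patch the site */
    }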
-
- 17 February 2010, 1 commit
-
-
Committed by Mike Frysinger
Most implementations of arch_syscall_addr() are the same, so create a default version in common code and move the one piece that differs (the syscall table) to asm/syscall.h. New arch ports don't have to waste time copying & pasting this simple function. The s390/sparc versions need to be different, so document why.

Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
LKML-Reference: <1264498803-17278-1-git-send-email-vapier@gentoo.org>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
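The common default simply reads the arch's syscall table; modulo formatting, it looks like:

    unsigned long __init __weak arch_syscall_addr(int nr)
    {
            return (unsigned long)sys_call_table[nr];
    }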
-
- 03 November 2009, 1 commit
-
-
Committed by Suresh Siddha
On x86_64, kernel text mappings are mapped read-only with CONFIG_DEBUG_RODATA. So use the kernel identity mapping instead of the kernel text mapping to modify the kernel text.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Tested-by: Steven Rostedt <rostedt@goodmis.org>
LKML-Reference: <20091029024821.080941108@sbs-t61.sc.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 14 October 2009, 1 commit
-
-
Committed by Frederic Weisbecker
Most of the syscall metadata processing is done in arch code, but these operations are mostly generic across archs. Especially now that we have a common variable name expressing the number of syscalls supported by an arch, NR_syscalls, the only remaining bit that needs to reside in arch code is the syscall nr to addr translation.

v2: Compare syscall symbols only after the "sys" prefix, so that we avoid spurious mismatches on archs that have syscall wrappers, in which case syscall symbols have "SyS"-prefixed aliases. (Reported by: Heiko Carstens)

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Jason Baron <jbaron@redhat.com>
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
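The prefix-insensitive comparison boils down to something like this (a sketch; the kernel later factored exactly this into a helper):

    /* "sys_foo" vs "SyS_foo": compare only past the 3-char prefix */
    static int syscall_match_sym_name(const char *sym, const char *name)
    {
            return !strcmp(sym + 3, name + 3);
    }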
-
- 12 October 2009, 1 commit
-
-
Committed by Joe Perches
Remove prefixes from pr_<level>; use pr_fmt(fmt) instead. No change in output.

Signed-off-by: Joe Perches <joe@perches.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <9b377eefae9e28c599dd4a17bdc81172965e9931.1254701151.git.joe@perches.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 27 August 2009, 1 commit
-
-
Committed by Jason Baron
Convert the syscall event tracing code to use NR_syscalls instead of FTRACE_SYSCALL_MAX. NR_syscalls is standard across most arches, and this reduces code confusion/complexity.

Signed-off-by: Jason Baron <jbaron@redhat.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Cc: Jiaying Zhang <jiayingz@google.com>
Cc: Martin Bligh <mbligh@google.com>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: Josh Stone <jistone@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
LKML-Reference: <9b4f1a84ecae57cc6599412772efa36f0d2b815b.1251146513.git.jbaron@redhat.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
-
- 12 August 2009, 3 commits
-
-
Committed by Jason Baron
The current state of syscall tracepoints generates only one event id for all syscall events. This patch associates an id with each syscall trace event, so that we can identify each syscall trace event using the 'perf' tool.

Signed-off-by: Jason Baron <jbaron@redhat.com>
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Cc: Jiaying Zhang <jiayingz@google.com>
Cc: Martin Bligh <mbligh@google.com>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
-
Committed by Jason Baron
Call arch_init_ftrace_syscalls at boot, so we can determine the set of syscalls for the syscall trace events early.

Signed-off-by: Jason Baron <jbaron@redhat.com>
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Cc: Jiaying Zhang <jiayingz@google.com>
Cc: Martin Bligh <mbligh@google.com>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
-
Committed by Jason Baron
Add a new function to support translating a syscall name to its number at runtime. This allows the syscall event tracer to map syscall names to numbers.

Signed-off-by: Jason Baron <jbaron@redhat.com>
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Cc: Jiaying Zhang <jiayingz@google.com>
Cc: Martin Bligh <mbligh@google.com>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
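A hedged sketch of the translation (simplified from the patch; syscalls_metadata is the per-arch metadata table built at init time):

    int syscall_name_to_nr(char *name)
    {
            int i;

            for (i = 0; i < NR_syscalls; i++) {
                    if (syscalls_metadata[i] &&
                        !strcmp(syscalls_metadata[i]->name, name))
                            return i;
            }
            return -1;
    }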
-
- 06 August 2009, 1 commit
-
-
Committed by Frederic Weisbecker
The function graph tracer used to have protection against NMIs while entering function entry tracing. But this is useless now: this tracer is reentrant, and the ring buffer supports NMI tracing. We can therefore drop this protection.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
-