- 27 October 2008, 1 commit
-
-
Submitted by Steven Rostedt
Impact: avoid section mismatch warning, clean up

The dynamic ftrace code determines which nop is safe to use at start up. When it finds a safe nop for patching, it sets a pointer called ftrace_nop to point to the code. All call sites are then patched to this nop. Later, when tracing is turned on, this ftrace_nop variable is used again to compare the location to make sure it is a nop before we update it to an mcount call. If this fails just once, a warning is printed and ftrace is disabled.

Rakib Mullick noted that the code that sets up the nop is in a .init section, whereas the nop itself is in the .text section. This is needed because the nop is used later on after boot up. The problem is that the test of the nop jumps back to the setup code and causes a "section mismatch" warning. Rakib first recommended converting the nop to .init.text, but as stated above, this would fail since that text is used later.

The real solution is to extend Rakib's patch and make ftrace_nop an array, and just save the code from the assembly to this array. Now the section can stay as an init section, and we have a nop to use later on.

Reported-by: Rakib Mullick <rakib.mullick@gmail.com>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
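A minimal C sketch of the fix, with illustrative names (the constant and helpers are mine, not the kernel's exact code): the probe code can stay in an init section because only the bytes it copies into the array are needed later:

  #include <string.h>

  #define MCOUNT_INSN_SIZE 5   /* a 5-byte x86 call instruction */

  /* ordinary data, still valid after init memory is discarded */
  static unsigned char ftrace_nop[MCOUNT_INSN_SIZE];

  /* init-time setup: probe which nop is safe, then save its bytes */
  static void save_safe_nop(const unsigned char *safe_nop)
  {
          memcpy(ftrace_nop, safe_nop, MCOUNT_INSN_SIZE);
  }

  /* later, at enable time, compare against the saved copy instead of
   * referencing (now freed) init text */
  static int site_is_nop(const unsigned char *ip)
  {
          return memcmp(ip, ftrace_nop, MCOUNT_INSN_SIZE) == 0;
  }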
-
- 23 October 2008, 5 commits
-
-
Submitted by Steven Rostedt
The entire file of ftrace.c in the arch code needs to be marked as notrace. It is much cleaner to do this from the Makefile with CFLAGS_REMOVE_ftrace.o.

[ powerpc already had this in its Makefile. ]

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Steven Rostedt
The arch-dependent function ftrace_mcount_set was only used by the daemon start up code. This patch removes it.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Steven Rostedt
Andrew Morton suggested using the proper API for reading and writing kernel areas that might fault.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Steven Rostedt
Add comments to explain what is happening in the x86 arch ftrace code.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Steven Rostedt
Have the ftrace_modify_code return error values:

  -EFAULT on error of reading the address
  -EINVAL if what is read does not match what it expected
  -EPERM  if the write fails to update after a successful match

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
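A hedged sketch of the resulting contract, simplified from the real ftrace_modify_code(); probe_kernel_read()/probe_kernel_write() are the fault-safe kernel accessors of that era, and are, as far as I can tell, the "proper API" the commit two entries up refers to:

  #include <linux/errno.h>
  #include <linux/string.h>
  #include <linux/uaccess.h>

  #define MCOUNT_INSN_SIZE 5

  static int ftrace_modify_code_sketch(unsigned long ip,
                                       unsigned char *old_code,
                                       unsigned char *new_code)
  {
          unsigned char replaced[MCOUNT_INSN_SIZE];

          /* read the text we are about to patch; this may fault */
          if (probe_kernel_read(replaced, (void *)ip, MCOUNT_INSN_SIZE))
                  return -EFAULT;

          /* refuse to patch if the site does not hold what we expect */
          if (memcmp(replaced, old_code, MCOUNT_INSN_SIZE) != 0)
                  return -EINVAL;

          /* write the replacement; a failed write is a permission problem */
          if (probe_kernel_write((void *)ip, new_code, MCOUNT_INSN_SIZE))
                  return -EPERM;

          return 0;
  }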
-
- 21 October 2008, 1 commit
-
-
Submitted by Steven Rostedt
In ftrace, logic is performed inside the WARN_ON_ONCE() macro, which can become a nop with some configs, discarding that logic along with the warning. This patch fixes it.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
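A hedged illustration of the pitfall (the helper names are mine): if a call with side effects sits inside WARN_ON_ONCE() and a config turns the macro into a no-op, the call disappears entirely; hoisting it out keeps the work unconditional:

  #include <linux/bug.h>

  /* hypothetical prototype, for illustration only */
  extern int ftrace_modify_code(unsigned long ip,
                                unsigned char *old_code,
                                unsigned char *new_code);

  static void patch_site(unsigned long ip,
                         unsigned char *old_code, unsigned char *new_code)
  {
          int ret;

          /* broken form: with some configs WARN_ON_ONCE() expands to
           * nothing, and the patching call vanishes with it:
           *
           *     WARN_ON_ONCE(ftrace_modify_code(ip, old_code, new_code));
           */

          /* safe form: do the work unconditionally, warn on failure */
          ret = ftrace_modify_code(ip, old_code, new_code);
          WARN_ON_ONCE(ret);
  }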
-
- 14 October 2008, 6 commits
-
-
Submitted by Anders Kaseorg
Commit 4c3dc21b136f8cb4b72afee16c3ba7e961656c0b in tip introduced the 5-byte NOP:

  ftrace_test_p6nop:
          jmp . + 5
          .byte 0x00, 0x00, 0x00

This is not friendly to disassemblers because an odd number of 0x00s ends in the middle of an instruction boundary. This changes the 0x00s to 1-byte NOPs (0x90).

Signed-off-by: Anders Kaseorg <andersk@mit.edu>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
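As byte sequences the change looks roughly like this (a hedged sketch; jmp . + 5 encodes as eb 03 in rel8 form, and the array names are mine):

  /* before: jmp . + 5 followed by three 0x00 bytes; a disassembler that
   * lands on the padding decodes garbage (00 00 is "add %al,(%eax)") */
  static const unsigned char jmp_nop_old[5] = { 0xeb, 0x03, 0x00, 0x00, 0x00 };

  /* after: same jmp, padded with 1-byte NOPs (0x90), so any starting
   * offset still disassembles cleanly */
  static const unsigned char jmp_nop_new[5] = { 0xeb, 0x03, 0x90, 0x90, 0x90 };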
-
Submitted by Frédéric Weisbecker
With latest -tip I get this bug:

[ 49.439988] in_atomic():0, irqs_disabled():1
[ 49.440118] INFO: lockdep is turned off.
[ 49.440118] Pid: 2814, comm: modprobe Tainted: G W 2.6.27-rc7 #4
[ 49.440118] [<c01215e1>] __might_sleep+0xe1/0x120
[ 49.440118] [<c01148ea>] ftrace_modify_code+0x2a/0xd0
[ 49.440118] [<c01148a2>] ? ftrace_test_p6nop+0x0/0xa
[ 49.440118] [<c016e80e>] __ftrace_update_code+0xfe/0x2f0
[ 49.440118] [<c01148a2>] ? ftrace_test_p6nop+0x0/0xa
[ 49.440118] [<c016f190>] ftrace_convert_nops+0x50/0x80
[ 49.440118] [<c016f1d6>] ftrace_init_module+0x16/0x20
[ 49.440118] [<c015498b>] load_module+0x185b/0x1d30
[ 49.440118] [<c01767a0>] ? find_get_page+0x0/0xf0
[ 49.440118] [<c02463c0>] ? sprintf+0x0/0x30
[ 49.440118] [<c034e012>] ? mutex_lock_interruptible_nested+0x1f2/0x350
[ 49.440118] [<c0154eb3>] sys_init_module+0x53/0x1b0
[ 49.440118] [<c0352340>] ? do_page_fault+0x0/0x740
[ 49.440118] [<c0104012>] syscall_call+0x7/0xb
[ 49.440118] =======================

It is because ftrace_modify_code() calls copy_to_user and copy_from_user. These functions were inserted on the assumption that there couldn't be any race condition, but copy_[to/from]_user might sleep, and __ftrace_update_code is called with interrupts disabled via local_irq_save().

They were introduced by commit d5e92e8978fd2574e415dc2792c5eb592978243d: "ftrace: x86 use copy from user function".

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Harvey Harrison
Could just as easily change the three casts to cast to the correct type... this patch changes the type of ftrace_nop instead.

Suppresses the following sparse warnings:

arch/x86/kernel/ftrace.c:157:14: warning: incorrect type in assignment (different signedness)
arch/x86/kernel/ftrace.c:157:14:    expected long *static [toplevel] ftrace_nop
arch/x86/kernel/ftrace.c:157:14:    got unsigned long *<noident>
arch/x86/kernel/ftrace.c:161:14: warning: incorrect type in assignment (different signedness)
arch/x86/kernel/ftrace.c:161:14:    expected long *static [toplevel] ftrace_nop
arch/x86/kernel/ftrace.c:161:14:    got unsigned long *<noident>
arch/x86/kernel/ftrace.c:165:14: warning: incorrect type in assignment (different signedness)
arch/x86/kernel/ftrace.c:165:14:    expected long *static [toplevel] ftrace_nop
arch/x86/kernel/ftrace.c:165:14:    got unsigned long *<noident>

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
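A hedged reconstruction of what sparse was pointing at (the surrounding declarations are illustrative stand-ins; only the signedness mismatch is the point): the pointer was declared long * while everything assigned to it was unsigned long *, so changing the declaration removes all three warnings without casts:

  /* before: static long *ftrace_nop; every assignment below then
   * mixed signedness and tripped sparse */
  static unsigned long *ftrace_nop;

  /* illustrative stand-ins for the nop candidates being assigned */
  static unsigned long ftrace_test_p6nop[2];
  static unsigned long ftrace_test_nop5[2];

  static void pick_nop(int use_p6)
  {
          /* both sides are unsigned long * now; no casts required */
          ftrace_nop = use_p6 ? ftrace_test_p6nop : ftrace_test_nop5;
  }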
-
Submitted by Steven Rostedt
The modification of code is performed either by kstop_machine, before SMP starts, or on module code before the module is executed, so there is no reason to do the modifications from assembly. The copy to and from user functions are sufficient and produce cleaner, easier to read code.

Thanks to Benjamin Herrenschmidt for suggesting the idea.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Steven Rostedt
Mathieu Desnoyers revealed a bug in the original code. The nop that is used to replace the mcount caller can be a two-part nop. This runs the risk that a process can be preempted after executing the first nop, but before the second part of the nop.

The ftrace code calls kstop_machine to keep multiple CPUs from executing code that is being modified, but it does not protect against a task preempting in the middle of a two-part nop. If the above preemption happens and the tracer is enabled, then after kstop_machine runs, all those nops will be calls to the trace function. If the preempted process that was stopped between the two nops is executed again, it will execute half of the call to the trace function, and this might crash the system.

This patch instead uses what both the latest Intel and AMD specs suggest: the P6_NOP5 sequence of "0x0f 0x1f 0x44 0x00 0x00". Note, some older CPUs and QEMU might fault on this nop, so this nop is executed with fault handling first. If it detects a fault, it will then use the code "0x66 0x66 0x66 0x66 0x90". If that faults, it will then default to a simple "jmp 1f; .byte 0x00 0x00 0x00; 1:". The jmp is not optimal but will do if the first two cannot be executed.

TODO: Examine the cpuid to determine the nop to use.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
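A hedged sketch of the selection logic (the byte sequences are the ones named in the commit; the probing helper and its fault handling are simplified stand-ins for the kernel's exception-table machinery):

  static const unsigned char p6_nop5[5]  = { 0x0f, 0x1f, 0x44, 0x00, 0x00 };
  static const unsigned char k8_nop5[5]  = { 0x66, 0x66, 0x66, 0x66, 0x90 };
  /* jmp 1f; .byte 0x00,0x00,0x00; 1: works everywhere but costs a jump */
  static const unsigned char jmp_nop5[5] = { 0xeb, 0x03, 0x00, 0x00, 0x00 };

  /* hypothetical: returns 0 if the 5 bytes execute without faulting;
   * the real code does this with inline asm plus a fixup entry */
  extern int try_execute_nop(const unsigned char nop[5]);

  static const unsigned char *pick_nop5(void)
  {
          if (!try_execute_nop(p6_nop5))
                  return p6_nop5;      /* single 5-byte nop, atomic */
          if (!try_execute_nop(k8_nop5))
                  return k8_nop5;      /* prefixed 0x90, also atomic */
          return jmp_nop5;             /* fallback: jump over 3 bytes */
  }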
-
Submitted by Steven Rostedt
x86 now sets up the mcount locations through the build and no longer needs to record the ip when the function is executed. This patch changes the initial mcount to simply return. There's no need to do any other work. If the ftrace start up test fails, the original mcount will be what everything will use, so having this as fast as possible is a good thing.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
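A minimal sketch of such a stub, written here as GNU C top-level asm purely for illustration (the kernel defines it in an assembly file): until dynamic patching kicks in, the cheapest possible mcount is a bare ret.

  /* every traced function calls mcount; the default does nothing */
  void mcount(void);
  asm(".globl mcount\n"
      "mcount:\n"
      "\tret\n");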
-
- 24 June 2008, 1 commit
-
-
Submitted by Abhishek Sagar
Record the address of the mcount call-site. Currently all archs except sparc64 record the address of the instruction following the mcount call-site. Some general cleanups are entailed. Storing mcount addresses in rec->ip enables looking them up in the kprobe hash table later on to check if they're kprobe'd.

Signed-off-by: Abhishek Sagar <sagar.abhishek@gmail.com>
Cc: davem@davemloft.net
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
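A hedged sketch of the arithmetic involved on x86 (the constant name is borrowed from the kernel, the helper is illustrative): the return address mcount sees points just past the 5-byte call, so the call-site itself is that address minus the instruction size:

  #define MCOUNT_INSN_SIZE 5  /* length of "call mcount" on x86 */

  /* the address on the stack is the instruction *after* the call;
   * subtract the call's length to get the call-site itself */
  static unsigned long mcount_call_site(unsigned long return_addr)
  {
          return return_addr - MCOUNT_INSN_SIZE;
  }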
-
- 17 June 2008, 1 commit
-
-
Submitted by Ingo Molnar
fix:

  arch/x86/kernel/ftrace.c: Assembler messages:
  arch/x86/kernel/ftrace.c:82: Error: bad register name `%sil'
  make[1]: *** [arch/x86/kernel/ftrace.o] Error 1

Signed-off-by: Ingo Molnar <mingo@elte.hu>
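For context, a hedged illustration of why this error appears on 32-bit builds (my constraint choice, not necessarily the kernel's exact fix): %sil is the byte-wide view of %rsi and exists only in 64-bit mode, so an asm operand that lets gcc place a byte value in %esi breaks the i386 assembler; the "q" constraint restricts byte operands to registers that have 8-bit forms on i386 (%al/%bl/%cl/%dl):

  static void byte_store_sketch(unsigned char *dst, unsigned char val)
  {
          /* "q" = a register with an 8-bit subregister on i386; a
           * plain "r" could pick esi, and the assembler would then be
           * asked for the nonexistent %sil */
          asm volatile("movb %1, %0" : "=m"(*dst) : "q"(val));
  }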
-
- 10 June 2008, 1 commit
-
-
Submitted by Abhishek Sagar
Remove the unneeded function ftrace_ip_converted().

Signed-off-by: Abhishek Sagar <sagar.abhishek@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 24 May 2008, 5 commits
-
-
Submitted by Steven Rostedt
The fault label to jump to on a fault while updating the code was misplaced, preventing the fault from being recorded.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Submitted by Steven Rostedt
This patch replaces the indirect call to the mcount function pointer with a direct call that will be patched by the dynamic ftrace routines.

On boot up, the mcount function calls the ftrace_stub function. When the dynamic ftrace code is initialized, the ftrace_stub is replaced with a call to ftrace_record_ip, which records the instruction pointers of the locations that call it. Later, the ftraced daemon will call kstop_machine and patch all the locations to nops.

When ftrace is enabled, the original calls to mcount will now be set to call ftrace_caller, which will do a direct call to the registered ftrace function. This direct call is also patched when the function that should be called is updated.

All patching is performed by a kstop_machine routine to prevent any type of race conditions associated with modifying code on the fly.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
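A hedged sketch of the life cycle described above (the names are mine; every arrow is a 5-byte rewrite performed under kstop_machine):

  /* 1) the one call inside mcount is retargeted at init time:
   *      call ftrace_stub       (boot: do nothing)
   *   -> call ftrace_record_ip  (record each caller's address)
   *
   * 2) each recorded "call mcount" site is then rewritten: */
  enum site_code { SITE_CALL_MCOUNT, SITE_NOP, SITE_CALL_TRACER };

  /* what the next kstop_machine pass writes at a recorded site */
  static enum site_code next_patch(int tracing_on)
  {
          if (tracing_on)
                  return SITE_CALL_TRACER;  /* nop -> call ftrace_caller */
          return SITE_NOP;                  /* call -> 5-byte nop */
  }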
-
Submitted by Steven Rostedt
This patch moves the memory management of the ftrace records out of the arch code and into the generic code making the arch code simpler.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Submitted by Steven Rostedt
This patch patches the call to mcount with nops instead of a jmp over the mcount call.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Submitted by Steven Rostedt
This patch adds a feature to dynamically replace the ftrace code with jmps, to allow a kernel with ftrace configured to run as fast as it can without it configured.

The way this works is that on bootup (if ftrace is enabled), a ftrace function is registered to record the instruction pointer of all places that call the function.

Later, if there's still any code to patch, a kthread is awoken (rate limited to at most once a second) that performs a stop_machine and replaces all the code that was called with a jmp over the call to ftrace. It only replaces what was found the previous time. Typically the system reaches equilibrium quickly after bootup and there's no code patching needed at all.

e.g.

  call ftrace  /* 5 bytes */

is replaced with

  jmp 3f  /* jmp is 2 bytes and we jump 3 forward */
3:

When we want to enable ftrace for function tracing, the IP recording is removed, and stop_machine is called again to replace all the locations that were recorded back to the call of ftrace. When it is disabled, we replace the code back to the jmp.

Allocation is done by the kthread. If the ftrace recording function is called and we don't have any record slots available, then we simply skip that call. Once a second a new page (if needed) is allocated for recording new ftrace function calls. A large batch is allocated at boot up to get most of the calls there.

Because we do this via stop_machine, we don't have to worry about another CPU executing a ftrace call as we modify it. But we do need to worry about NMIs, so all functions that might be called via NMI must be annotated with notrace_nmi. When this code is configured in, the NMI code will not call notrace.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
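A hedged sketch of the record-then-patch loop (the names, the fixed-size buffer, and patch_site_to_nop() are illustrative; the real code batches page allocation and runs under ftrace's own locking):

  #include <linux/delay.h>
  #include <linux/kthread.h>
  #include <linux/stop_machine.h>

  #define MAX_RECORDS 4096

  /* hypothetical buffer of call-sites seen since the last patch pass */
  static unsigned long pending_ips[MAX_RECORDS];
  static int nr_pending;

  extern void patch_site_to_nop(unsigned long ip);  /* hypothetical */

  /* called from the registered trace function: remember who called us;
   * if we are out of slots, just skip, we will catch it next time */
  static void ftrace_record_ip_sketch(unsigned long ip)
  {
          if (nr_pending < MAX_RECORDS)
                  pending_ips[nr_pending++] = ip;
  }

  /* stop_machine callback: every other CPU is spinning, so rewriting
   * the 5 bytes at each recorded site cannot race with execution */
  static int patch_pending(void *unused)
  {
          int i;

          for (i = 0; i < nr_pending; i++)
                  patch_site_to_nop(pending_ips[i]);
          nr_pending = 0;
          return 0;
  }

  /* the once-a-second kthread body */
  static int ftraced_sketch(void *unused)
  {
          while (!kthread_should_stop()) {
                  if (nr_pending)
                          stop_machine(patch_pending, NULL, NULL);
                  ssleep(1);
          }
          return 0;
  }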
-