1. 26 Nov 2008, 1 commit
  2. 23 Nov 2008, 1 commit
  3. 18 Nov 2008, 1 commit
    • tracing/function-return-tracer: add the overrun field · 0231022c
      Committed by Frederic Weisbecker
      Impact: help find a better depth for the function return trace
      
      We arbitrarily defined the depth of the function return trace as "20".
      Perhaps this is not enough. To help find an optimal depth, we now
      measure the overrun: the number of functions that have been missed
      for the current thread. By default it is not displayed; a flag has to
      be set on the return tracer:
      
      echo overrun > /debug/tracing/trace_options
      
      The overrun will then be printed on the right.
      
      As the trace below shows, the current depth of 20 is not enough.
      
      update_wall_time+0x37f/0x8c0 -> update_xtime_cache (345 ns) (Overruns: 2838)
      update_wall_time+0x384/0x8c0 -> clocksource_get_next (1141 ns) (Overruns: 2838)
      do_timer+0x23/0x100 -> update_wall_time (3882 ns) (Overruns: 2838)
      tick_do_update_jiffies64+0xbf/0x160 -> do_timer (5339 ns) (Overruns: 2838)
      tick_sched_timer+0x6a/0xf0 -> tick_do_update_jiffies64 (7209 ns) (Overruns: 2838)
      vgacon_set_cursor_size+0x98/0x120 -> native_io_delay (2613 ns) (Overruns: 274)
      vgacon_cursor+0x16e/0x1d0 -> vgacon_set_cursor_size (33151 ns) (Overruns: 274)
      set_cursor+0x5f/0x80 -> vgacon_cursor (36432 ns) (Overruns: 274)
      con_flush_chars+0x34/0x40 -> set_cursor (38790 ns) (Overruns: 274)
      release_console_sem+0x1ec/0x230 -> up (721 ns) (Overruns: 274)
      release_console_sem+0x225/0x230 -> wake_up_klogd (316 ns) (Overruns: 274)
      con_flush_chars+0x39/0x40 -> release_console_sem (2996 ns) (Overruns: 274)
      con_write+0x22/0x30 -> con_flush_chars (46067 ns) (Overruns: 274)
      n_tty_write+0x1cc/0x360 -> con_write (292670 ns) (Overruns: 274)
      smp_apic_timer_interrupt+0x2a/0x90 -> native_apic_mem_write (330 ns) (Overruns: 274)
      irq_enter+0x17/0x70 -> idle_cpu (413 ns) (Overruns: 274)
      smp_apic_timer_interrupt+0x2f/0x90 -> irq_enter (1525 ns) (Overruns: 274)
      ktime_get_ts+0x40/0x70 -> getnstimeofday (465 ns) (Overruns: 274)
      ktime_get_ts+0x60/0x70 -> set_normalized_timespec (436 ns) (Overruns: 274)
      ktime_get+0x16/0x30 -> ktime_get_ts (2501 ns) (Overruns: 274)
      hrtimer_interrupt+0x77/0x1a0 -> ktime_get (3439 ns) (Overruns: 274)
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  4. 16 Nov 2008, 3 commits
    • tracing/function-return-tracer: support for dynamic ftrace on function return tracer · e7d3737e
      Committed by Frederic Weisbecker
      This patch adds support for dynamic tracing on the function return tracer.
      The whole difference from normal dynamic function tracing is that we don't
      need to hook into a particular callback; all we want is to dynamically nop
      out or restore the calls to ftrace_caller (which here is ftrace_return_caller).
      
      Some safety checks ensure that we are not trying to launch dynamic tracing
      for return tracing while normal function tracing is already running.
      
      An example trace with getnstimeofday set as a filter:
      
      ktime_get_ts+0x22/0x50 -> getnstimeofday (2283 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1396 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1382 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1825 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1426 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1464 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1524 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1382 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1382 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1434 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1464 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1502 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1404 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1397 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1051 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1314 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1344 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1163 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1390 ns)
      ktime_get_ts+0x22/0x50 -> getnstimeofday (1374 ns)
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • tracing/function-return-tracer: add a barrier to ensure return stack index is incremented in memory · b01c7466
      Committed by Frederic Weisbecker
      Impact: fix possible race condition in ftrace function return tracer
      
      This fixes a possible race condition that can occur if the index
      increment is not immediately flushed to memory (a sketch of the
      pattern follows this entry).
      
      Thanks to Andi Kleen and Steven Rostedt for pointing out this issue
      and suggesting this solution.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
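      A minimal sketch of the fixed pattern, with assumed field and constant
      names (not the literal diff): the barrier() keeps the compiler from
      deferring the index store until after the slot is written, so an
      interrupt arriving in between sees a consistent index.
      
          /* push a return address onto the per-thread return stack (sketch) */
          static int push_return_trace(struct thread_info *ti, unsigned long ret)
          {
                  int index;
          
                  /* stack is full: the caller counts this as an overrun */
                  if (ti->curr_ret_stack >= FTRACE_RET_STACK_SIZE - 1)
                          return -EBUSY;
          
                  index = ++ti->curr_ret_stack;
                  barrier();      /* increment must hit memory before the slot is used */
                  ti->ret_stack[index].ret = ret;
                  return 0;
          }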
    • ftrace: pass module struct to arch dynamic ftrace functions · 31e88909
      Committed by Steven Rostedt
      Impact: allow archs more flexibility on dynamic ftrace implementations
      
      Dynamic ftrace has largely been developed on x86. Since x86 does not
      have the same limitations as other architectures, the ftrace interaction
      between the generic code and the architecture-specific code was not
      flexible enough to handle some of the issues that other architectures
      have.
      
      Most notably, module trampolines. Because of the limited branch distance
      available to some architectures when module code calls into the core
      kernel, the module load code must create a trampoline that performs the
      longer jump into core kernel code.
      
      The problem arises when this happens to a call to mcount. Ftrace checks
      all code before modifying it and makes sure the current code is what
      it expects. Right now, there is not enough information to handle modifying
      module trampolines.
      
      This patch changes the API between the generic dynamic ftrace code and
      the arch-dependent code. There are now two functions for modifying code
      (a skeletal sketch follows this entry):
      
        ftrace_make_nop(mod, rec, addr) - convert the code at rec->ip into
             a nop, where the original text is calling addr. (mod is the
             module struct if called by module init.)
      
        ftrace_make_call(rec, addr) - convert the code at rec->ip that should
             be a nop into a caller to addr.
      
      The record "rec" now has a new field called "arch", where the architecture
      can attach any special attributes to each call-site record.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
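      For illustration, a skeletal x86-flavored implementation of the two
      hooks might look like the sketch below; ftrace_call_replace(),
      ftrace_nop_replace() and ftrace_modify_code() are assumed helpers that
      build the expected byte sequences and do the verify-then-write.
      
          extern unsigned char *ftrace_call_replace(unsigned long ip, unsigned long addr);
          extern unsigned char *ftrace_nop_replace(void);
          extern int ftrace_modify_code(unsigned long ip, unsigned char *old,
                                        unsigned char *new);
          
          int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
                              unsigned long addr)
          {
                  unsigned char *old, *new;
          
                  old = ftrace_call_replace(rec->ip, addr); /* expected: call <addr> */
                  new = ftrace_nop_replace();               /* to write: 5-byte nop */
                  return ftrace_modify_code(rec->ip, old, new);
          }
          
          int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
          {
                  unsigned char *old, *new;
          
                  old = ftrace_nop_replace();               /* expected: the nop */
                  new = ftrace_call_replace(rec->ip, addr); /* to write: call <addr> */
                  return ftrace_modify_code(rec->ip, old, new);
          }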
  5. 13 Nov 2008, 2 commits
  6. 11 Nov 2008, 3 commits
    • tracing: function return tracer, build fix · 19b3e967
      Committed by Ingo Molnar
      fix:
      
       arch/x86/kernel/ftrace.c: In function 'ftrace_return_to_handler':
       arch/x86/kernel/ftrace.c:112: error: implicit declaration of function 'cpu_clock'
      
      cpu_clock() gets declared indirectly via a number of include paths, but
      its real home is sched.h. (The build failure is only triggered if enough
      other kernel components are turned off.) The likely one-line fix is
      sketched below.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
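      Presumably the fix amounts to including the header that actually
      declares cpu_clock(), instead of relying on indirect inclusion:
      
          #include <linux/sched.h>  /* cpu_clock(); previously only pulled in indirectly */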
    • tracing, x86: function return tracer, fix assembly constraints · 867f7fb3
      Committed by Ingo Molnar
      fix:
      
       arch/x86/kernel/ftrace.c: Assembler messages:
       arch/x86/kernel/ftrace.c:140: Error: missing ')'
       arch/x86/kernel/ftrace.c:140: Error: junk `(%ebp))' after expression
       arch/x86/kernel/ftrace.c:141: Error: missing ')'
       arch/x86/kernel/ftrace.c:141: Error: junk `(%ebp))' after expression
      
      the [parent_replaced] operand is used in an "=rm" fashion, so that
      constraint is correct in isolation - but [parent_old] aliases register
      %0 and uses it in an addressing mode that is only valid with registers -
      so change the constraint from "=rm" to "=r".
      
      This fixes the build failure. (A minimal reproduction follows this entry.)
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
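      A minimal reproduction of this kind of failure (a sketch on x86-32, not
      the patched code): if the output operand were constrained "=rm" and gcc
      picked a stack slot for it, the "(%0)" addressing mode below would
      expand to the invalid "((%ebp))" seen in the error log.
      
          static unsigned long read_via_reg(unsigned long *p)
          {
                  unsigned long val = (unsigned long)p;
          
                  /* "(%0)" is only a valid operand if %0 is a register:
                   * "=r" guarantees that, "=rm" does not */
                  asm("movl (%0), %0" : "=r" (val) : "0" (val));
                  return val;
          }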
    • tracing, x86: add low level support for ftrace return tracing · caf4b323
      Committed by Frederic Weisbecker
      Impact: add infrastructure for function-return tracing
      
      Add low level support for ftrace return tracing.
      
      This plug-in stores return addresses on the thread_info structure of
      the current task (the layout is sketched after this entry).
      
      The index of the current return address is initialized when the task
      is the first one (init) and when a process forks (in the child). It does
      not need to be reset on sys_execve, because after that syscall the task
      still has to return from the kernel functions it called.
      
      Note that the code of return_to_handler was suggested by Steven
      Rostedt, as were almost all of the improvement ideas in this v3.
      
      For safety, arch/x86/kernel/process_32.c is not traced, because
      __switch_to() changes the current task during its execution. That could
      cause inconsistency in the stored return address of this function, even
      though I saw no crash when testing with tracing enabled on it.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
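      The per-task storage described above is roughly this shape (a sketch
      based on the description; field names assumed, not the literal patch):
      
          #define FTRACE_RET_STACK_SIZE 20        /* the arbitrary depth discussed earlier */
          
          struct ftrace_ret_stack {
                  unsigned long ret;              /* the original return address */
                  unsigned long func;             /* the traced function */
                  unsigned long long calltime;    /* timestamp taken at entry */
          };
          
          struct thread_info_additions {          /* stand-in for the thread_info fields */
                  int curr_ret_stack;             /* index of the current return address */
                  struct ftrace_ret_stack ret_stack[FTRACE_RET_STACK_SIZE];
          };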
  7. 31 Oct 2008, 3 commits
    • ftrace: nmi safe code clean ups · a26a2a27
      Committed by Steven Rostedt
      Impact: cleanup
      
      This patch cleans up the NMI-safe code for dynamic ftrace, as suggested
      by Andrew Morton.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • ftrace: nmi update statistics · b807c3d0
      Committed by Steven Rostedt
      Impact: add more debug info to /debugfs/tracing/dyn_ftrace_total_info
      
      This patch adds dynamic ftrace NMI update statistics to the
      /debugfs/tracing/dyn_ftrace_total_info stat file.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • ftrace: nmi safe code modification · 17666f02
      Committed by Steven Rostedt
      Impact: fix crashes that can occur in NMI handlers, if their code is modified
      
      Modifying code is something that needs special care. On SMP boxes,
      if code that is being modified is also being executed on another CPU,
      that CPU will see undefined results.
      
      Dynamic ftrace uses kstop_machine to make the system act like a
      uniprocessor system. But this does not address NMIs, which can still
      run on other CPUs.
      
      One approach would be to make all code used by NMIs untraceable. But
      NMIs can call notifiers that spread throughout the kernel; this would
      be very hard to maintain, and the chance of missing a function would
      be very high.
      
      The approach this patch takes is to have the NMIs themselves perform
      the modification if one is taking place. This works because writing to
      code executing on another CPU is not harmful if what is written is the
      same as what already exists.
      
      Two buffers are used: an IP buffer and a "code" buffer.
      
      The steps the patcher takes are:
      
       1) Put the instruction pointer into the IP buffer
          and the new code into the "code" buffer.
       2) Set a flag that says we are modifying code.
       3) Wait for any running NMIs to finish.
       4) Write the code.
       5) Clear the flag.
       6) Wait for any running NMIs to finish.
      
      If an NMI is executing, it will also write the pending code.
      Multiple writes are OK, because what is being written is the same.
      The patcher must then wait for all running NMIs to finish before
      moving on to the next location to be patched.
      
      This is basically the RCU approach to code modification. (A sketch of
      the handshake follows this entry.)
      
      Thanks to Ingo Molnar for suggesting the idea, and to Arjan van de Ven
      for his guidance on what is safe and what is not.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
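      A sketch of the handshake, with assumed names: do_code_write() writes
      the buffered bytes to the buffered ip, and wait_for_nmis() spins until
      the in-flight NMI count drops to zero.
      
          static void *mod_code_ip;            /* the IP buffer */
          static void *mod_code_newcode;       /* the "code" buffer */
          static volatile int mod_code_write;  /* "we are modifying code" flag */
          static atomic_t nmi_running;         /* NMIs currently executing */
          
          static void do_code_write(void);     /* writes mod_code_newcode at mod_code_ip */
          static void wait_for_nmis(void);     /* spins until nmi_running reaches zero */
          
          void ftrace_nmi_enter(void)          /* called on NMI entry */
          {
                  atomic_inc(&nmi_running);
                  smp_mb();
                  if (mod_code_write)
                          do_code_write();     /* rewriting the same bytes is harmless */
          }
          
          void ftrace_nmi_exit(void)
          {
                  smp_mb();
                  atomic_dec(&nmi_running);
          }
          
          static void patch_one_site(void *ip, void *newcode)
          {
                  mod_code_ip = ip;            /* 1) fill the buffers */
                  mod_code_newcode = newcode;
                  smp_mb();
                  mod_code_write = 1;          /* 2) announce the modification */
                  smp_mb();
                  wait_for_nmis();             /* 3) drain in-flight NMIs */
                  do_code_write();             /* 4) write the code */
                  smp_mb();
                  mod_code_write = 0;          /* 5) clear the flag */
                  smp_mb();
                  wait_for_nmis();             /* 6) drain again before the next site */
          }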
  8. 27 Oct 2008, 1 commit
    • ftrace: use a real variable for ftrace_nop in x86 · 8115f3f0
      Committed by Steven Rostedt
      Impact: avoid section mismatch warning, clean up
      
      Dynamic ftrace determines at start-up which nop is safe to use. When it
      finds a safe nop for patching, it sets a pointer called ftrace_nop to
      point to that code. All call sites are then patched to this nop.
      
      Later, when tracing is turned on, this ftrace_nop variable is used again
      to check that each location really is a nop before updating it to an
      mcount call. If this check fails just once, a warning is printed and
      ftrace is disabled.
      
      Rakib Mullick noted that the code that sets up the nop is in a .init
      section, whereas the nop itself is in the .text section. This is needed
      because the nop is used later on, after boot-up. The problem is that the
      test of the nop points back into the setup code and causes a "section
      mismatch" warning.
      
      Rakib first recommended converting the nop to .init.text, but as stated
      above, this would fail since that text is used later.
      
      The real solution is to extend Rakib's patch and make ftrace_nop an
      array, simply saving the code from the assembly into this array.
      
      Now the section can stay an init section, and we still have a nop to use
      later on. (The shape of the fix is sketched after this entry.)
      Reported-by: Rakib Mullick <rakib.mullick@gmail.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
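      The shape of the fix, roughly (a sketch; pick_safe_nop() is an assumed
      stand-in for the start-up probing described above):
      
          static const unsigned char *pick_safe_nop(void);  /* fault-tests the candidates */
          
          /* ordinary data: lives for the whole runtime, so .text references
           * to it no longer point back into init code */
          static unsigned char ftrace_nop[MCOUNT_INSN_SIZE];
          
          int __init ftrace_dyn_arch_init(void)
          {
                  memcpy(ftrace_nop, pick_safe_nop(), MCOUNT_INSN_SIZE);
                  return 0;
          }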
  9. 23 Oct 2008, 5 commits
  10. 21 Oct 2008, 1 commit
  11. 14 Oct 2008, 6 commits
    • ftrace: make ftrace_test_p6nop disassembler-friendly · 8b27386a
      Committed by Anders Kaseorg
      Commit 4c3dc21b136f8cb4b72afee16c3ba7e961656c0b in tip introduced the
      5-byte NOP ftrace_test_p6nop:
      
         jmp . + 5
         .byte 0x00, 0x00, 0x00
      
      This is not friendly to disassemblers, because the odd number of 0x00s
      makes decoding fall out of sync with the real instruction boundaries.
      This changes the 0x00s to 1-byte NOPs (0x90); see the before/after
      sketch following this entry.
      Signed-off-by: Anders Kaseorg <andersk@mit.edu>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
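      In inline-assembly terms, the pad bytes become real instructions
      (a before/after sketch, not the literal patch):
      
          static void nop_padding_demo(void)
          {
                  /* before: three raw 0x00 pad bytes; a disassembler decodes
                   * them as (parts of) other instructions and loses sync */
                  asm volatile("jmp 1f; .byte 0x00, 0x00, 0x00; 1:");
          
                  /* after: each pad byte is a self-contained 1-byte NOP (0x90) */
                  asm volatile("jmp 2f; nop; nop; nop; 2:");
          }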
    • x86/ftrace: use uaccess in atomic context · ac2b86fd
      Committed by Frédéric Weisbecker
      With latest -tip I get this bug:
      
      [   49.439988] in_atomic():0, irqs_disabled():1
      [   49.440118] INFO: lockdep is turned off.
      [   49.440118] Pid: 2814, comm: modprobe Tainted: G        W 2.6.27-rc7 #4
      [   49.440118]  [<c01215e1>] __might_sleep+0xe1/0x120
      [   49.440118]  [<c01148ea>] ftrace_modify_code+0x2a/0xd0
      [   49.440118]  [<c01148a2>] ? ftrace_test_p6nop+0x0/0xa
      [   49.440118]  [<c016e80e>] __ftrace_update_code+0xfe/0x2f0
      [   49.440118]  [<c01148a2>] ? ftrace_test_p6nop+0x0/0xa
      [   49.440118]  [<c016f190>] ftrace_convert_nops+0x50/0x80
      [   49.440118]  [<c016f1d6>] ftrace_init_module+0x16/0x20
      [   49.440118]  [<c015498b>] load_module+0x185b/0x1d30
      [   49.440118]  [<c01767a0>] ? find_get_page+0x0/0xf0
      [   49.440118]  [<c02463c0>] ? sprintf+0x0/0x30
      [   49.440118]  [<c034e012>] ? mutex_lock_interruptible_nested+0x1f2/0x350
      [   49.440118]  [<c0154eb3>] sys_init_module+0x53/0x1b0
      [   49.440118]  [<c0352340>] ? do_page_fault+0x0/0x740
      [   49.440118]  [<c0104012>] syscall_call+0x7/0xb
      [   49.440118]  =======================
      
      This is because ftrace_modify_code() calls copy_to_user() and
      copy_from_user(). These calls were added on the assumption that there
      could be no race condition, but copy_{to,from}_user() may sleep, and
      __ftrace_update_code() is called with interrupts disabled
      (local_irq_save). The offending pattern is reduced to a sketch after
      this entry.
      
      These calls were introduced by commit
      d5e92e8978fd2574e415dc2792c5eb592978243d:
      "ftrace: x86 use copy from user function"
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
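      Reduced to a sketch, the offending pattern is (ip stands in for the
      call-site address being checked; not the literal code):
      
          static void show_bug(unsigned long ip)
          {
                  unsigned long flags;
                  unsigned char old[MCOUNT_INSN_SIZE];
          
                  local_irq_save(flags);  /* __ftrace_update_code() holds IRQs off */
          
                  /* BUG: copy_from_user() may fault and sleep; sleeping with
                   * IRQs disabled is exactly what the in_atomic():0,
                   * irqs_disabled():1 splat above complains about */
                  if (copy_from_user(old, (const void __user *)ip, sizeof(old)))
                          return;
          
                  local_irq_restore(flags);
          }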
    • x86: suppress trivial sparse signedness warnings · 37a52f5e
      Committed by Harvey Harrison
      We could just as easily change the three casts to cast to the correct
      type; instead, this patch changes the type of ftrace_nop (see the
      before/after sketch following this entry).
      
      Suppresses these sparse warnings:
      
       arch/x86/kernel/ftrace.c:157:14: warning: incorrect type in assignment (different signedness)
       arch/x86/kernel/ftrace.c:157:14:    expected long *static [toplevel] ftrace_nop
       arch/x86/kernel/ftrace.c:157:14:    got unsigned long *<noident>
       arch/x86/kernel/ftrace.c:161:14: warning: incorrect type in assignment (different signedness)
       arch/x86/kernel/ftrace.c:161:14:    expected long *static [toplevel] ftrace_nop
       arch/x86/kernel/ftrace.c:161:14:    got unsigned long *<noident>
       arch/x86/kernel/ftrace.c:165:14: warning: incorrect type in assignment (different signedness)
       arch/x86/kernel/ftrace.c:165:14:    expected long *static [toplevel] ftrace_nop
       arch/x86/kernel/ftrace.c:165:14:    got unsigned long *<noident>
      Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
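      Before and after, side by side (illustrative declarations only; the
      _before/_after suffixes are mine, not in the patch):
      
          static long *ftrace_nop_before;          /* sparse: assigned unsigned long *
                                                    * values differ in signedness */
          static unsigned long *ftrace_nop_after;  /* matches what is assigned,
                                                    * so the casts can go away */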
    • ftrace: x86 use copy to and from user functions · 6f93fc07
      Committed by Steven Rostedt
      Code modification is performed either under kstop_machine, before SMP
      starts up, or on module code before the module is executed. There is no
      reason to do the modification from assembly. The copy-to-user and
      copy-from-user functions are sufficient and produce cleaner,
      easier-to-read code.
      
      Thanks to Benjamin Herrenschmidt for suggesting the idea.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • ftrace: use only 5 byte nops for x86 · 732f3ca7
      Committed by Steven Rostedt
      Mathieu Desnoyers revealed a bug in the original code. The nop that is
      used to replace the mcount caller can be a two-part nop. This runs the
      risk of a process being preempted after executing the first nop, but
      before the second part of it.
      
      The ftrace code calls kstop_machine to keep multiple CPUs from executing
      the code being modified, but this does not protect against a task being
      preempted in the middle of a two-part nop.
      
      If that preemption happens and the tracer is then enabled, after
      kstop_machine runs all those nops will be calls to the trace function.
      When the process that was preempted between the two nops runs again, it
      will execute half of the call to the trace function, and this might
      crash the system.
      
      This patch instead uses what both the latest Intel and AMD specs
      suggest: the P6_NOP5 sequence "0x0f 0x1f 0x44 0x00 0x00".
      
      Note that some older CPUs and QEMU might fault on this nop, so the nop
      is first executed with fault handling. If a fault is detected, the code
      "0x66 0x66 0x66 0x66 0x90" is tried next. If that also faults, the code
      defaults to a simple "jmp 1f; .byte 0x00 0x00 0x00; 1:". The jmp is not
      optimal but will do if the first two cannot be executed. (The three
      candidates are laid out after this entry.)
      
      TODO: Examine the cpuid to determine the nop to use.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
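      The three candidate sequences from the message, as byte arrays (a
      sketch; at start-up each is executed under a fault handler and the
      first one that does not fault wins):
      
          /* preferred: the Intel/AMD-recommended 5-byte nop (may fault on
           * some older CPUs and on QEMU) */
          static const unsigned char p6_nop5[] = { 0x0f, 0x1f, 0x44, 0x00, 0x00 };
          
          /* first fallback: 0x66 operand-size prefixes on a 1-byte nop */
          static const unsigned char k8_nop5[] = { 0x66, 0x66, 0x66, 0x66, 0x90 };
          
          /* last resort: jmp over three dead bytes - not optimal, but
           * a single instruction, so it cannot be split by preemption */
          static const unsigned char jmp_nop5[] = { 0xeb, 0x03, 0x00, 0x00, 0x00 };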
    • ftrace: x86 mcount stub · 0a37605c
      Committed by Steven Rostedt
      x86 now sets up the mcount locations at build time and no longer needs
      to record the ip when the function is executed. This patch changes the
      initial mcount to simply return; there is no need to do any other work.
      If the ftrace start-up test fails, the original mcount is what
      everything will use, so having it be as fast as possible is a good
      thing. (A sketch follows this entry.)
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
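      So the whole stub is essentially a bare return (a sketch of the idea;
      the real version lives in the x86 entry assembly):
      
          /* every traced function calls mcount; with build-time mcount
           * location recording there is nothing left for the stub to do */
          asm(".globl mcount\n"
              "mcount:\n"
              "        ret\n");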
  12. 24 Jun 2008, 1 commit
  13. 17 Jun 2008, 1 commit
  14. 10 Jun 2008, 1 commit
  15. 24 May 2008, 5 commits
    • ftrace: fix the fault label in updating code · a56be3fe
      Committed by Steven Rostedt
      The fault label to jump to on a fault while updating the code was
      misplaced, preventing the fault from being recorded.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: use dynamic patching for updating mcount calls · d61f82d0
      Committed by Steven Rostedt
      This patch replaces the indirect call through the mcount function
      pointer with a direct call that will be patched by the
      dynamic ftrace routines.
      
      On boot-up, the mcount function calls the ftrace_stub function.
      When the dynamic ftrace code is initialized, ftrace_stub is replaced
      with a call to ftrace_record_ip, which records the instruction
      pointers of the locations that call it.
      
      Later, the ftraced daemon will call kstop_machine and patch all of
      those locations to nops.
      
      When ftrace is enabled, the original calls to mcount are instead set
      to call ftrace_caller, which makes a direct call to the registered
      ftrace function. This direct call is also patched when the function
      that should be called is updated.
      
      All patching is performed by a kstop_machine routine to prevent the
      kind of race conditions associated with modifying code on the fly.
      (The call-site lifecycle is summarized after this entry.)
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
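      Summarizing the lifecycle of one mcount call site (the state names are
      mine, for this summary only):
      
          enum mcount_callsite_state {
                  CALLS_FTRACE_STUB,     /* boot: the mcount hook does nothing */
                  CALLS_RECORD_IP,       /* init: ftrace_record_ip logs callers */
                  PATCHED_TO_NOP,        /* ftraced + kstop_machine: nopped out */
                  CALLS_FTRACE_CALLER,   /* enabled: direct call to the handler */
          };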
    • ftrace: move memory management out of arch code · 3c1720f0
      Committed by Steven Rostedt
      This patch moves the memory management of the ftrace records out of
      the arch code and into the generic code, making the arch code simpler.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: use nops instead of jmp · dfa60aba
      Committed by Steven Rostedt
      This patch patches the call to mcount with nops instead
      of a jmp over the mcount call.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • ftrace: dynamic enabling/disabling of function calls · 3d083395
      Committed by Steven Rostedt
      This patch adds a feature to dynamically replace the ftrace code with
      jmps, allowing a kernel with ftrace configured in to run as fast as one
      without it.
      
      The way this works is that on boot-up (if ftrace is enabled), an ftrace
      function is registered to record the instruction pointer of all places
      that call the function.
      
      Later, if there is still code to patch, a kthread is awoken (rate-limited
      to at most once a second) that performs a stop_machine and replaces all
      the code that was called with a jmp over the call to ftrace. It only
      replaces what was found the previous time. Typically the system reaches
      equilibrium quickly after boot-up, and no further code patching is
      needed at all.
      
      e.g.
      
        call ftrace  /* 5 bytes */
      
      is replaced with
      
        jmp 3f  /* jmp is 2 bytes and we jump 3 forward */
      3:
      
      When we want to enable ftrace for function tracing, the IP recording is
      removed, and stop_machine is called again to replace all the recorded
      locations with the call to ftrace. When tracing is disabled, we replace
      the code back with the jmp.
      
      Allocation is done by the kthread. If the ftrace recording function is
      called and no record slots are available, that call is simply skipped.
      Once a second, a new page (if needed) is allocated for recording new
      ftrace function calls. A large batch is allocated at boot-up to capture
      most of the calls there.
      
      Because we do this via stop_machine, we don't have to worry about
      another CPU executing an ftrace call as we modify it. But we do need to
      worry about NMIs, so all functions that might be called from an NMI must
      be annotated with notrace_nmi. When this code is configured in, the NMI
      code will not call notrace.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>