  1. 24 January 2013, 1 commit
  2. 23 January 2013, 12 commits
    • ring-buffer: Remove trace.h from ring_buffer.c · 0b07436d
      Committed by Steven Rostedt
      ring_buffer.c used to require declarations from trace.h, but
      these have moved to the generic header files. There's nothing
      in trace.h that ring_buffer.c requires.

      There are some headers that trace.h included which ring_buffer.c
      needs, but it's best that ring_buffer.c includes them directly
      rather than pulling in trace.h.

      Also, some things may use ring_buffer.c without having tracing
      configured. This removes a dependency that may otherwise appear
      in the future.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      0b07436d
    • ring-buffer: User context bit recursion checking · 567cd4da
      Committed by Steven Rostedt
      Using context bit recursion checking, we can help increase the
      performance of the ring buffer.
      
      Before this patch:
      
       # echo function > /debug/tracing/current_tracer
       # for i in `seq 10`; do ./hackbench 50; done
      Time: 10.285
      Time: 10.407
      Time: 10.243
      Time: 10.372
      Time: 10.380
      Time: 10.198
      Time: 10.272
      Time: 10.354
      Time: 10.248
      Time: 10.253
      
      (average: 10.3012)
      
      Now we have:
      
       # echo function > /debug/tracing/current_tracer
       # for i in `seq 10`; do ./hackbench 50; done
      Time: 9.712
      Time: 9.824
      Time: 9.861
      Time: 9.827
      Time: 9.962
      Time: 9.905
      Time: 9.886
      Time: 10.088
      Time: 9.861
      Time: 9.834
      
      (average: 9.876)
      
       a 4% savings!
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      567cd4da
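      The "context bit" scheme amounts to one recursion flag per context
      (normal, softirq, irq, NMI) instead of a single flat flag, so an
      interrupt that preempts a buffer commit can still commit its own
      events. A minimal user-space model of the idea follows; the names
      and the simplified context handling are illustrative, not the
      actual ring_buffer.c helpers.

      #include <stdio.h>

      enum { CTX_NMI, CTX_IRQ, CTX_SOFTIRQ, CTX_NORMAL };

      static unsigned int current_context;	/* one bit per context */

      /* claim the bit for this context; < 0 means same-context recursion */
      static int trace_recursive_lock(int ctx)
      {
      	if (current_context & (1u << ctx))
      		return -1;
      	current_context |= 1u << ctx;
      	return ctx;
      }

      static void trace_recursive_unlock(int bit)
      {
      	current_context &= ~(1u << bit);
      }

      int main(void)
      {
      	int normal = trace_recursive_lock(CTX_NORMAL);
      	/* an "interrupt" arriving here gets its own bit, so it is not
      	 * mistaken for recursion and can still be traced */
      	int irq = trace_recursive_lock(CTX_IRQ);

      	printf("normal=%d irq=%d\n", normal, irq);	/* both succeed */

      	trace_recursive_unlock(irq);
      	trace_recursive_unlock(normal);
      	return 0;
      }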
    • ftrace: Use only the preempt version of function tracing · 897f68a4
      Committed by Steven Rostedt
      The function tracer had two different versions of function tracing.
      
      The disabling of irqs version and the preempt disable version.
      
      As function tracing is very intrusive and can cause nasty recursion
      issues, it has its own recursion protection. But the old method of
      doing this was a flat layer: if it detected that a recursion was
      happening, it would just return without recording.
      
      This made the preempt version (much faster than the irq disabling one)
      not very useful, because if an interrupt were to occur after the
      recursion flag was set, the interrupt would not be traced at all,
      because every function that was traced would think it recursed on
      itself (due to the context it preempted setting the recursive flag).
      
      Now that we have a recursion flag for every context level, we
      no longer need to worry about that. We can disable preemption,
      set the current context recursion check bit, and go on. If an
      interrupt were to come along, it would check its own context bit
      and happily continue to trace.
      
      As the preempt version is faster than the irq disable version,
      there's no more reason to keep the irq disable version around.
      The irq disable version also had an issue with missing out on
      tracing NMI code.
      
      Remove the irq disable function tracer version and have the
      preempt disable version be the default (and only version).
      
      Before this patch we had from running:
      
       # echo function > /debug/tracing/current_tracer
       # for i in `seq 10`; do ./hackbench 50; done
      Time: 12.028
      Time: 11.945
      Time: 11.925
      Time: 11.964
      Time: 12.002
      Time: 11.910
      Time: 11.944
      Time: 11.929
      Time: 11.941
      Time: 11.924
      
      (average: 11.9512)
      
      Now we have:
      
       # echo function > /debug/tracing/current_tracer
       # for i in `seq 10`; do ./hackbench 50; done
      Time: 10.285
      Time: 10.407
      Time: 10.243
      Time: 10.372
      Time: 10.380
      Time: 10.198
      Time: 10.272
      Time: 10.354
      Time: 10.248
      Time: 10.253
      
      (average: 10.3012)
      
       a 13.8% savings!
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      897f68a4
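      The remaining (preempt-disable) version takes roughly the shape
      below. This is a sketch only: the function name and the bit-range
      constants are assumptions, and the per-context recursion helpers
      are the ones introduced elsewhere in this series.

      static void function_trace_call(unsigned long ip, unsigned long parent_ip)
      {
      	int bit;

      	/* preempt off is enough now: each context has its own recursion bit */
      	preempt_disable_notrace();

      	bit = trace_test_and_set_recursion(TRACE_FTRACE_START, TRACE_FTRACE_MAX);
      	if (bit < 0)
      		goto out;	/* genuine recursion within this context */

      	/* ... record the function entry into the trace buffer ... */

      	trace_clear_recursion(bit);
       out:
      	preempt_enable_notrace();
      }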
    • tracing: Avoid unnecessary multiple recursion checks · edc15caf
      Committed by Steven Rostedt
      When function tracing occurs, the following steps are taken:
        If the arch does not support an ftrace feature:
         call the internal function (uses the INTERNAL bits), which calls...
        If a callback is registered to the "global" list, the list
         function is called and recursion checks the GLOBAL bits,
         then this function calls...
        The function callback, which can use the FTRACE bits to
         check for recursion.

      Now if the arch does not support a feature, and it calls
      the global list function, which calls the ftrace callback,
      all three of these steps will do recursion protection.
      There's no reason to do one if the previous caller already
      did. The recursion that we are protecting against will
      go through the same steps again.
      
      To prevent the multiple recursion checks, if a recursion
      bit is set that is higher than the MAX bit of the current
      check, then we know that the check was made by the previous
      caller, and we can skip the current check.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      edc15caf
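      The check-skipping logic boils down to something like the helper
      below. It is a sketch of the idea rather than the exact trace.h
      code: current->trace_recursion is the real per-task field, but the
      bit layout is simplified and 'max' is assumed to be well below 31.

      static __always_inline int trace_test_and_set_recursion(int start, int max)
      {
      	unsigned int val = current->trace_recursion;
      	int bit;

      	/* A bit above 'max' is already set: a caller earlier in the
      	 * chain (a higher layer) already did a recursion check that
      	 * covers us, so skip the check at this level. */
      	if (val >> (max + 1))
      		return 0;	/* 0 = check skipped (bit 0 assumed unused) */

      	bit = trace_get_context_bit() + start;
      	if (unlikely(val & (1 << bit)))
      		return -1;	/* real same-context recursion */

      	current->trace_recursion = val | (1 << bit);
      	return bit;
      }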
    • tracing: Make the trace recursion bits into enums · e46cbf75
      Committed by Steven Rostedt
      Convert the bits into enums, which makes the code a little easier
      to maintain.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      e46cbf75
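      The converted definitions look something like the enum below. The
      exact bit names and their order are illustrative, not a verbatim
      copy of trace.h; what matters is that each protection layer gets
      one bit per context, and that layers called earlier sit at higher
      bit positions.

      enum {
      	/* function callback protection, one bit per context */
      	TRACE_FTRACE_NMI_BIT,
      	TRACE_FTRACE_IRQ_BIT,
      	TRACE_FTRACE_SIRQ_BIT,
      	TRACE_FTRACE_BIT,		/* normal context */

      	/* the "global" ops list function */
      	TRACE_GLOBAL_NMI_BIT,
      	TRACE_GLOBAL_IRQ_BIT,
      	TRACE_GLOBAL_SIRQ_BIT,
      	TRACE_GLOBAL_BIT,

      	/* the internal (arch-compat) list function */
      	TRACE_INTERNAL_NMI_BIT,
      	TRACE_INTERNAL_IRQ_BIT,
      	TRACE_INTERNAL_SIRQ_BIT,
      	TRACE_INTERNAL_BIT,
      };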
    • ftrace: Add context level recursion bit checking · c29f122c
      Committed by Steven Rostedt
      Currently, for recursion checking in the function tracer, ftrace
      tests a task_struct bit to determine if the function tracer had
      recursed or not. If it has, then it will return without going
      further.
      
      But this leads to races. If an interrupt came in after the bit
      was set, the functions being traced would see that bit set and
      think that the function tracer recursed on itself, and would return.
      
      Instead add a bit for each context (normal, softirq, irq and nmi).
      
      A check of which context the task is in is made before testing the
      associated bit. Now if an interrupt preempts the function tracer
      after the previous context has been set, the interrupt functions
      can still be traced.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      c29f122c
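      Selecting the bit for the current context comes down to the usual
      in_nmi()/in_irq()/in_interrupt() tests. A sketch, assuming the
      helper name and the bit numbering (NMI lowest, normal context
      highest within a group):

      static __always_inline int trace_get_context_bit(void)
      {
      	int bit;

      	if (in_interrupt()) {
      		if (in_nmi())
      			bit = 0;	/* NMI */
      		else if (in_irq())
      			bit = 1;	/* hard interrupt */
      		else
      			bit = 2;	/* softirq */
      	} else
      		bit = 3;		/* normal process context */

      	return bit;
      }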
    • ftrace: Optimize the function tracer list loop · 0a016409
      Committed by Steven Rostedt
      There are lots of places that perform:
      
             op = rcu_dereference_raw(ftrace_control_list);
             while (op != &ftrace_list_end) {
      
      Add a helper macro to do this, and also optimize for a single
      entity. That is, gcc will optimize a loop for either no iterations
      or more than one iteration. But usually only a single callback
      is registered to the function tracer, thus the optimized case
      should be a single pass. To do this we now do:
      
      	op = rcu_dereference_raw(list);
      	do {
      		[...]
      	} while (likely(op = rcu_dereference_raw((op)->next)) &&
      	       unlikely((op) != &ftrace_list_end));
      
      An op is always registered (ftrace_list_end when no callbacks are
      registered), thus when a single callback is registered, the linked
      list looks like:
      
       top => callback => ftrace_list_end => NULL.
      
      The likely(op = op->next) still must be performed due to the race
      of removing the callback, where the first op assignment could
      equal ftrace_list_end. In that case, the op->next would be NULL.
      But this is unlikely (only happens in a race condition when
      removing the callback).
      
      But it is very likely that the next op would be ftrace_list_end,
      unless more than one callback has been registered. This tells
      gcc what the most common case is and gives the fast path the
      fewest branches.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      0a016409
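      The helper pair could look like the macros below, wrapping exactly
      the loop quoted above. The macro names here are a guess at the
      helper this commit describes, not confirmed by the text:

      #define do_for_each_ftrace_op(op, list)			\
      	op = rcu_dereference_raw(list);			\
      	do

      #define while_for_each_ftrace_op(op)				\
      	while (likely(op = rcu_dereference_raw((op)->next)) &&	\
      	       unlikely((op) != &ftrace_list_end))

      Call sites then read:

      	do_for_each_ftrace_op(op, ftrace_control_list) {
      		/* ... invoke op->func ... */
      	} while_for_each_ftrace_op(op);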
    • ftrace: Fix function tracing recursion self test · 9640388b
      Committed by Steven Rostedt
      The function tracing recursion self test should not crash
      the machine if the recursion test fails. If it detects that
      the function tracing is recursing when it should not be, then
      bail; don't go into an infinite recursive loop.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      9640388b
    • ftrace: Fix global function tracers that are not recursion safe · 63503794
      Committed by Steven Rostedt
      If one of the function tracers set by the global ops is not recursion
      safe, it can still be called directly without the added recursion
      protection supplied by the ftrace infrastructure.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      63503794
    • tracing: Fix selftest function recursion accounting · 05cbbf64
      Committed by Steven Rostedt
      The test that checks function recursion does things differently
      if the arch does not support all ftrace features. But that really
      doesn't make a difference with how the test runs, and either way
      the count variable should be 2 at the end.
      
      Currently the test wrongly fails for archs that don't support all
      the ftrace features.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      05cbbf64
    • tracing: Fix race with max_tr and changing tracers · 34600f0e
      Committed by Steven Rostedt
      There's a race condition between the setting of a new tracer and
      the update of the max trace buffers (the swap). When a new tracer
      is added, it sets current_trace to nop_trace before disabling
      the old tracer. At this moment, if the old tracer uses update_max_tr(),
      the update may trigger the warning against !current_trace->use_max_tr,
      as nop_trace doesn't have that set.
      
      As update_max_tr() requires that interrupts be disabled, we can
      add a check to see if current_trace == nop_trace and bail if it
      does. Then when disabling the current_trace, set it to nop_trace
      and run synchronize_sched(). This will make sure all calls to
      update_max_tr() have completed (it was called with interrupts disabled).
      
      As a cleanup, this commit also removes shrinking and recreating
      the max_tr buffer if the old and new tracers both have use_max_tr set.
      The old way used to always shrink the buffer, and then expand it
      for the next tracer. This is a waste of time.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      34600f0e
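      Condensed, the two sides of the fix described above look roughly
      like this (a sketch of the logic, not the literal diff; at this
      point current_trace was still a global pointer):

      	/* in update_max_tr(), which runs with interrupts disabled */
      	if (current_trace == &nop_trace)
      		return;		/* a tracer switch is in progress, bail */

      	/* in the tracer-switch path, before tearing down the old tracer */
      	current_trace = &nop_trace;
      	synchronize_sched();	/* all in-flight update_max_tr() calls,
      				 * made with irqs disabled, are now done */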
    • tracing: Remove trace.h header from trace_clock.c · 0a71e4c6
      Committed by Steven Rostedt
      As trace_clock is used by other things besides tracing, and it
      does not require anything from trace.h, it is best not to include
      the header file in trace_clock.c.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      0a71e4c6
  3. 22 January 2013, 14 commits
    • tracing: Remove the extra 4 bytes of padding in events · b000c806
      Committed by Steven Rostedt
      PowerTop v2beta hardcoded the offsets of the event fields it was
      using, so it broke when we removed the Big Kernel Lock counter
      from the event header.
      
       (commit e6e1e259 "tracing: Remove lock_depth from event entry")
      
      Because this broke userspace, it was determined that we must
      keep those 4 bytes around.
      
       (commit a3a4a5ac "Regression: partial revert "tracing: Remove lock_depth from event entry"")
      
      This unfortunately wastes space in the ring buffer: 4 bytes per
      event, where a lot of events are just 24 bytes. That's 16% of the
      buffer wasted. A million events will add 4 megs of dead space
      to the buffer.
      
      It was later noticed that PowerTop v2beta could not work on systems
      where the kernel was 64 bit but the userspace was 32 bit.
      The reason was that the offsets differ between the two, and the
      hardcoded offsets for one would not work with the other.
      
      With PowerTop v2 final, it implemented the same interface that both
      perf and trace-cmd use. That is, it reads the format file of
      the event to find the offsets of the fields it needs. This fixes
      the problem with running powertop on a 32 bit userspace running
      on a 64 bit kernel. It also no longer requires the 4 byte padding.
      
      As PowerTop v2 has been out for a while, and is included in all
      major distributions, it is time that we can safely remove the
      4 bytes of padding. Users of PowerTop v2beta should upgrade to
      PowerTop v2 final.
      
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Acked-by: Arjan van de Ven <arjan@linux.intel.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      b000c806
    • kprobes/x86: Move kprobes stuff under arch/x86/kernel/kprobes/ · f684199f
      Committed by Masami Hiramatsu
      Move the arch-dependent kprobes code under arch/x86/kernel/kprobes/.
      
      Link: http://lkml.kernel.org/r/20120928081522.3560.75469.stgit@ltc138.sdl.hitachi.co.jp
      
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      [ fixed whitespace and s/__attribute__((packed))/__packed/ ]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f684199f
    • kprobes/x86: Move ftrace-based kprobe code into kprobes-ftrace.c · e7dbfe34
      Committed by Masami Hiramatsu
      Split the ftrace-based kprobes code out of kprobes, and introduce
      the CONFIG_(HAVE_)KPROBES_ON_FTRACE Kconfig flags.
      As a cleanup, this also moves the kprobe_ftrace check
      into skip_singlestep.
      
      Link: http://lkml.kernel.org/r/20120928081520.3560.25624.stgit@ltc138.sdl.hitachi.co.jp
      
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      e7dbfe34
    • ftrace: Move ARCH_SUPPORTS_FTRACE_SAVE_REGS in Kconfig · 06aeaaea
      Committed by Masami Hiramatsu
      Move the SAVE_REGS support flag into Kconfig and rename
      it to CONFIG_DYNAMIC_FTRACE_WITH_REGS. This also introduces
      CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS, which indicates that
      the architecture-dependent part of ftrace has code that
      saves full registers, while CONFIG_DYNAMIC_FTRACE_WITH_REGS
      indicates that this code is enabled.
      
      Link: http://lkml.kernel.org/r/20120928081516.3560.72534.stgit@ltc138.sdl.hitachi.co.jp
      
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      06aeaaea
    • tracing/fgraph: Add max_graph_depth to limit function_graph depth · 8741db53
      Committed by Steven Rostedt
      Add the file max_graph_depth to the debug tracing directory; it lets
      the user limit the depth of the function graph tracer.
      
      A very useful operation is to set the depth to 1. Then it traces only
      the first function that is called when entering the kernel. This can
      be used to determine what system operations interrupt a process.
      
      For example, to work on NOHZ processes (single tasks running without
      a timer tick), if any interrupt goes off and preempts that task, this
      code will show it happening.
      
        # cd /sys/kernel/debug/tracing
        # echo 1 > max_graph_depth
        # echo function_graph > current_tracer
        # cat per_cpu/cpu/<cpu-of-process>/trace
      
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      8741db53
    • tracing/lockdep: Disable lockdep first in entering NMI · 0f1ac8fd
      Committed by Steven Rostedt
      When function tracing is enabled with either debug locks enabled
      or preempt tracing, add_preempt_count() is traced. This is an
      issue with lockdep and function tracing. As function tracing
      can disable interrupts, and lockdep records that change, lockdep
      may not be able to handle this recursion if it happens from
      an NMI context.
      
      The first thing that an NMI does is:
      
       #define nmi_enter()					\
      	do {							\
      		ftrace_nmi_enter();				\
      		BUG_ON(in_nmi());				\
      		add_preempt_count(NMI_OFFSET + HARDIRQ_OFFSET);	\
      		lockdep_off();					\
      		rcu_nmi_enter();				\
      		trace_hardirq_enter();				\
      	} while (0)
      
      When add_preempt_count() is traced, and the tracing callback
      disables interrupts, it will jump into the lockdep code. There
      are some places in lockdep that can't handle this re-entrance,
      and it causes lockdep to fail.
      
      As lockdep_off() (and lockdep_on()) is simply:
      
      void lockdep_off(void)
      {
      	current->lockdep_recursion++;
      }
      
      and is never traced, it can be called first in nmi_enter()
      and lockdep_on() last in nmi_exit().
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      0f1ac8fd
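      With the change described, the macro would look roughly like the
      version below: the quoted nmi_enter() with lockdep_off() hoisted
      to the front (and nmi_exit() mirrored, with lockdep_on() moved
      last). A sketch derived from the message, not the literal diff.

       #define nmi_enter()					\
      	do {							\
      		lockdep_off();					\
      		ftrace_nmi_enter();				\
      		BUG_ON(in_nmi());				\
      		add_preempt_count(NMI_OFFSET + HARDIRQ_OFFSET);	\
      		rcu_nmi_enter();				\
      		trace_hardirq_enter();				\
      	} while (0)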
    • tracing: Remove unneeded check of max_tr->buffer before tracing_reset · 84c6cf0d
      Committed by Steven Rostedt
      There's now a check in tracing_reset_online_cpus() for whether the
      buffer is allocated or NULL, so there's no need to check before
      calling it with max_tr.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      84c6cf0d
    • tracing: Add checks if tr->buffer is NULL in tracing_reset{_online_cpus} · a5416411
      Committed by Hiraku Toyooka
      max_tr->buffer could be NULL in tracing_reset{_online_cpus}(). In that
      case a NULL pointer dereference would happen, so we should return
      immediately from these functions.
      
      Note, the current code does not call tracing_reset*() with max_tr when
      its buffer is NULL, but future code will. This patch is needed to prevent
      the future code from crashing.
      
      Link: http://lkml.kernel.org/r/20121219070234.31200.93863.stgit@liselsia

      Signed-off-by: Hiraku Toyooka <hiraku.toyooka.gu@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      a5416411
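      The guard amounts to an early return at the top of each function,
      along these lines (a sketch, not the exact diff):

      void tracing_reset_online_cpus(struct trace_array *tr)
      {
      	struct ring_buffer *buffer = tr->buffer;

      	if (!buffer)		/* e.g. max_tr's buffer was never allocated */
      		return;

      	/* ... reset each per-cpu buffer as before ... */
      }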
    • tracing/syscalls: Make local functions static · 6aea49cb
      Committed by Fengguang Wu
      Some functions in the syscall tracing code are used only locally
      within the file, but they are labeled global. Convert them to static
      functions.
      Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      6aea49cb
    • tracing: Verify target file before registering a uprobe event · d24d7dbf
      Committed by Jovi Zhang
      Without this patch, we can register a uprobe event for a directory.
      Enabling such a uprobe event would fail anyway.

      Example:
      $ echo 'p /bin:0x4245c0' > /sys/kernel/debug/tracing/uprobe_events

      However, directories cannot be valid targets for uprobes.
      Hence verify that the target is a regular file during probe
      registration.
      
      Link: http://lkml.kernel.org/r/20130103004212.690763002@goodmis.org
      
      Cc: Namhyung Kim <namhyung@kernel.org>
      Signed-off-by: Jovi Zhang <bookjovi@gmail.com>
      Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      [ cleaned up whitespace and removed redundant IS_DIR() check ]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      d24d7dbf
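      The check reduces to rejecting anything that is not a regular file
      when the probe is created, something like the snippet below (a
      sketch: the surrounding error handling in trace_uprobe.c is
      omitted and the goto label is assumed):

      	inode = igrab(path.dentry->d_inode);
      	path_put(&path);

      	if (!inode || !S_ISREG(inode->i_mode)) {
      		ret = -EINVAL;
      		goto fail_address_parse;
      	}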
    • tracing: Use this_cpu_ptr per-cpu helper · d8a0349c
      Committed by Shan Wei
      typeof(&buffer) is a pointer to an array of 1024 chars, i.e.
      char (*)[1024]. But typeof(&buffer[0]) is a pointer to char, which
      matches the return type of get_trace_buf(). As is well known, the
      value of &buffer is equal to &buffer[0], so returning
      this_cpu_ptr(&percpu_buffer->buffer[0]) avoids the type cast.
      
      Link: http://lkml.kernel.org/r/50A1A800.3020102@gmail.com

      Reviewed-by: Christoph Lameter <cl@linux.com>
      Signed-off-by: Shan Wei <davidshan@tencent.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      d8a0349c
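      Spelled out, the type distinction the message relies on is plain C
      (an illustration, not the kernel code):

      struct trace_buffer_struct {
      	char buffer[1024];
      };

      /*
       * Given struct trace_buffer_struct *percpu_buffer:
       *
       *   &percpu_buffer->buffer      has type  char (*)[1024]
       *   &percpu_buffer->buffer[0]   has type  char *
       *
       * Both hold the same address, but only the second already has the
       * char * type that get_trace_buf() returns, so
       * this_cpu_ptr(&percpu_buffer->buffer[0]) needs no explicit cast.
       */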
    • ring-buffer: Remove unnecessary recusive call in rb_advance_iter() · 771e0384
      Committed by Steven Rostedt
      The original ring-buffer code had special checks at the start
      of rb_advance_iter() and instead of repeating them again at the
      end of the function if a certain condition existed, I just did
      a recursive call to rb_advance_iter() because the special condition
      would cause rb_advance_iter() to return early (after the checks).
      
      But as things have changed, the special checks no longer exist
      and the only thing done for the special condition is to call
      rb_inc_iter() and return. Instead of making a confusing recursive
      call, just call rb_inc_iter() directly.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      771e0384
    • tracing: Fix sparse warning with is_signed_type() macro · 418c59e4
      Committed by Steven Rostedt
      Sparse complains when is_signed_type() is used on a pointer.
      This macro is needed for the format output used for ftrace
      and perf, to know if a binary field is a signed type or not.
      The is_signed_type() macro is used against all fields that are
      recorded by events to automate the operation.
      
      The problem sparse has is with the current way is_signed_type()
      works:
      
        ((type)-1 < 0)
      
      If "type" is a pointer, then sparse does not like it being compared
      to an integer (zero). The simple fix is to just give zero the
      same type. The runtime result stays the same.
      Reported-by: Robert Jarzmik <robert.jarzmik@free.fr>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      418c59e4
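      Following that description, the change is confined to the macro
      itself (a sketch; the macro lives in the tracing event headers):

        /* before: a pointer type ends up compared with a plain integer 0 */
        #define is_signed_type(type)	(((type)(-1)) < 0)

        /* after: the zero gets the same type, so pointers compare with
         * a null pointer of their own type and sparse stays quiet */
        #define is_signed_type(type)	(((type)(-1)) < (type)0)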
    • ftrace: Be first to run code modification on modules · c1bf08ac
      Committed by Steven Rostedt
      If some other kernel subsystem has a module notifier, and adds a kprobe
      to a ftrace mcount point (now that kprobes work on ftrace points),
      when the ftrace notifier runs it will fail and disable ftrace, as well
      as kprobes that are attached to ftrace points.
      
      Here's the error:
      
       WARNING: at kernel/trace/ftrace.c:1618 ftrace_bug+0x239/0x280()
       Hardware name: Bochs
       Modules linked in: fat(+) stap_56d28a51b3fe546293ca0700b10bcb29__8059(F) nfsv4 auth_rpcgss nfs dns_resolver fscache xt_nat iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack lockd sunrpc ppdev parport_pc parport microcode virtio_net i2c_piix4 drm_kms_helper ttm drm i2c_core [last unloaded: bid_shared]
       Pid: 8068, comm: modprobe Tainted: GF            3.7.0-0.rc8.git0.1.fc19.x86_64 #1
       Call Trace:
        [<ffffffff8105e70f>] warn_slowpath_common+0x7f/0xc0
        [<ffffffff81134106>] ? __probe_kernel_read+0x46/0x70
        [<ffffffffa0180000>] ? 0xffffffffa017ffff
        [<ffffffffa0180000>] ? 0xffffffffa017ffff
        [<ffffffff8105e76a>] warn_slowpath_null+0x1a/0x20
        [<ffffffff810fd189>] ftrace_bug+0x239/0x280
        [<ffffffff810fd626>] ftrace_process_locs+0x376/0x520
        [<ffffffff810fefb7>] ftrace_module_notify+0x47/0x50
        [<ffffffff8163912d>] notifier_call_chain+0x4d/0x70
        [<ffffffff810882f8>] __blocking_notifier_call_chain+0x58/0x80
        [<ffffffff81088336>] blocking_notifier_call_chain+0x16/0x20
        [<ffffffff810c2a23>] sys_init_module+0x73/0x220
        [<ffffffff8163d719>] system_call_fastpath+0x16/0x1b
       ---[ end trace 9ef46351e53bbf80 ]---
       ftrace failed to modify [<ffffffffa0180000>] init_once+0x0/0x20 [fat]
        actual: cc:bb:d2:4b:e1
      
      A kprobe was added to the init_once() function in the fat module on load.
      But this happened before ftrace could have touched the code. As ftrace
      hadn't run yet, the kprobe system had no idea it was an ftrace point and
      simply added a breakpoint to the code (the 0xcc in cc:bb:d2:4b:e1).
      
      Then when ftrace went to modify the location from a call to mcount/fentry
      into a nop, it didn't see a call op, but instead it saw the breakpoint op
      and not knowing what to do with it, ftrace shut itself down.
      
      The solution is to simply give the ftrace module notifier the max priority.
      This should have been done regardless, as the core code ftrace modification
      also happens very early on in boot up. This makes the module modification
      closer to core modification.
      
      Link: http://lkml.kernel.org/r/20130107140333.593683061@goodmis.org
      
      Cc: stable@vger.kernel.org
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Reported-by: Frank Ch. Eigler <fche@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      c1bf08ac
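      Giving the notifier "max priority" is a one-field change on the
      notifier_block; roughly (the variable name is assumed, and INT_MAX
      is the natural reading of "max priority" in the message):

      static struct notifier_block ftrace_module_nb = {
      	.notifier_call	= ftrace_module_notify,
      	.priority	= INT_MAX,	/* run before any other module notifier */
      };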
  4. 18 January 2013, 2 commits
  5. 17 January 2013, 11 commits