1. 07 March 2014, 3 commits
  2. 04 March 2014, 2 commits
    • ftrace/x86: One more missing sync after fixup of function modification failure · 12729f14
      Petr Mladek authored
      If a failure occurs while modifying an ftrace function, it bails out and
      removes the tracepoints to get back to what the code originally was.

      The final sync run across the CPUs, after the fixup is done and before
      the ftrace int3 handler flag is reset, is missing.
      
      Here's the description of the problem:
      
      	CPU0				CPU1
      	----				----
        remove_breakpoint();
        modifying_ftrace_code = 0;
      
      				[still sees breakpoint]
      				<takes trap>
      				[sees modifying_ftrace_code as zero]
      				[no breakpoint handler]
      				[goto failed case]
      				[trap exception - kernel breakpoint, no
      				 handler]
      				BUG()
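
      The missing piece, then, is one more serializing sync before the flag is
      dropped. A minimal sketch of the corrected failure path in kernel-style C
      (remove_breakpoint(), run_sync() and modifying_ftrace_code are the names
      used in arch/x86/kernel/ftrace.c of this period; the surrounding flow is
      simplified, not the literal patch):

      	/* fixup path after a failed modification, sketched */
      	remove_breakpoint(rec);             /* restore the original byte */
      	run_sync();                         /* IPI every CPU so none can
      	                                     * still see the stale int3  */
      	atomic_dec(&modifying_ftrace_code); /* only now reset the flag   */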
      
      Link: http://lkml.kernel.org/r/1393258342-29978-2-git-send-email-pmladek@suse.cz
      
      Fixes: 8a4d0a68 "ftrace: Use breakpoint method to update ftrace caller"
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: H. Peter Anvin <hpa@linux.intel.com>
      Signed-off-by: Petr Mladek <pmladek@suse.cz>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      12729f14
    • ftrace/x86: Run a sync after fixup on failure · c932c6b7
      Steven Rostedt (Red Hat) authored
      If a failure occurs while enabling a trace, it bails out and will remove
      the tracepoints to get back to what the code originally was. But the
      fixup had some bugs in it. By injecting a failure into the code, the
      fixup ran to completion, but shortly afterward the system rebooted.

      There were two bugs here.
      
      The first was that there was no final sync run across the CPUs after the
      fixup was done and before the ftrace int3 handler flag was reset. That
      means that other CPUs could still see the breakpoint and trigger on it
      long after the flag was cleared, and the int3 handler would think it was
      a spurious interrupt. Worse yet, the stale breakpoint could then hit
      other breakpoint handlers, since it was the set ftrace int3 handler flag
      that had prevented the int3 handler from going further.
      
      Here's a description of the issue:
      
      	CPU0				CPU1
      	----				----
        remove_breakpoint();
        modifying_ftrace_code = 0;
      
      				[still sees breakpoint]
      				<takes trap>
      				[sees modifying_ftrace_code as zero]
      				[no breakpoint handler]
      				[goto failed case]
      				[trap exception - kernel breakpoint, no
      				 handler]
      				BUG()
      
      The second bug was that the removal of the breakpoints required the
      "within()" logic updates instead of accessing the ip address directly,
      as the kernel text is mapped read-only when CONFIG_DEBUG_RODATA is set,
      and the removal of the breakpoint is a modification of that kernel text.
      ftrace_write() includes the "within()" logic, whereas
      probe_kernel_write() does not. This prevented the breakpoint from being
      removed at all.
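
      The fix on this side routes the write through the identity mapping;
      roughly (closely following ftrace_write() in arch/x86/kernel/ftrace.c,
      simplified):

      	static int ftrace_write(unsigned long ip, const char *val, int size)
      	{
      		/*
      		 * With CONFIG_DEBUG_RODATA the kernel text mapping is
      		 * read-only, so modify the code through the kernel
      		 * identity mapping instead.
      		 */
      		if (within(ip, (unsigned long)_text, (unsigned long)_etext))
      			ip = (unsigned long)__va(__pa(ip));

      		return probe_kernel_write((void *)ip, val, size);
      	}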
      
      Link: http://lkml.kernel.org/r/1392650573-3390-1-git-send-email-pmladek@suse.cz
      Reported-by: Petr Mladek <pmladek@suse.cz>
      Tested-by: Petr Mladek <pmladek@suse.cz>
      Acked-by: H. Peter Anvin <hpa@linux.intel.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      c932c6b7
  3. 12 February 2014, 1 commit
    • ftrace/x86: Use breakpoints for converting function graph caller · 87fbb2ac
      Steven Rostedt (Red Hat) authored
      When the conversion was made to remove stop machine and use the
      breakpoint logic instead, the modification of the function graph caller
      was still done directly, as though it were being done under stop machine.

      As it is no longer converted via stop machine, there is a possibility
      that the code could be laid across cache lines, and if another CPU is
      accessing that function graph call while it is being updated, it could
      cause a General Protection Fault.
      
      Convert the update of the function graph caller to use the breakpoint
      method as well.
      
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: stable@vger.kernel.org # 3.5+
      Fixes: 08d636b6 "ftrace/x86: Have arch x86_64 use breakpoints instead of stop machine"
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      87fbb2ac
  4. 06 November 2013, 1 commit
  5. 17 November 2012, 1 commit
  6. 20 July 2012, 2 commits
    • ftrace/x86: Add save_regs for i386 function calls · 4de72395
      Steven Rostedt authored
      Add saving full regs for function tracing on i386.
      The saving of regs was influenced by patches sent out by
      Masami Hiramatsu.
      
      Link: http://lkml.kernel.org/r/20120711195745.379060003@goodmis.org
      Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      4de72395
    • ftrace/x86: Add separate function to save regs · 08f6fba5
      Steven Rostedt authored
      Add a way to have different functions calling different trampolines.
      If an ftrace_ops wants regs saved on the return, then have only the
      functions with that ops registered save regs. Functions registered by
      other ops would not be affected, unless the functions overlap.

      If one ftrace_ops registered functions A, B and C, and another ops
      registered functions A and D to save regs, then only functions A and D
      would save regs. Functions B and C would work as normal. Although A is
      registered by both ops, normal and save-regs, this is fine, as saving
      the regs is needed to satisfy one of the ops that calls it, while the
      regs are ignored by the other ops function.

      x86_64 implements the full regs saving, and i386 just passes a NULL
      for regs to satisfy the ftrace_ops interface, as an arch must supply
      both regs and ftrace_ops parameters, even if regs is just NULL.
      
      It is OK for an arch to pass NULL regs. All function trace users that
      require regs passing must add the flag FTRACE_OPS_FL_SAVE_REGS when
      registering the ftrace_ops. If the arch does not support saving regs,
      then the ftrace_ops will fail to register. The flag
      FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED may be set to prevent the
      ftrace_ops from failing to register. In this case, the handler may
      either check if regs is not NULL or check ARCH_SUPPORTS_FTRACE_SAVE_REGS.
      If the arch supports passing regs, it will set this macro and pass regs
      for ops that request them. All other archs will just pass NULL.
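
      For illustration, a hedged sketch of how a caller might register such an
      ops (my_tracer_func and my_ops are hypothetical; the flag, the
      four-argument callback signature and register_ftrace_function() are the
      interfaces this series describes):

      	static void my_tracer_func(unsigned long ip, unsigned long parent_ip,
      				   struct ftrace_ops *op, struct pt_regs *regs)
      	{
      		if (regs)	/* NULL when the arch cannot supply regs */
      			pr_debug("traced %pS, regs->ip=%lx\n",
      				 (void *)ip, regs->ip);
      	}

      	static struct ftrace_ops my_ops = {
      		.func  = my_tracer_func,
      		.flags = FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED,
      	};

      	/* in some init path: */
      	register_ftrace_function(&my_ops);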
      
      Link: http://lkml.kernel.org/r/20120711195745.107705970@goodmis.org
      
      Cc: Alexander van Heukelum <heukelum@fastmail.fm>
      Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      08f6fba5
  7. 01 June 2012, 2 commits
    • ftrace: Use breakpoint method to update ftrace caller · 8a4d0a68
      Steven Rostedt authored
      On boot up and module load, it is fine to modify the code directly,
      without the use of breakpoints. This is because boot-up modification
      is done before SMP is initialized, so the modification is serial, and
      module load is done before the module executes.

      But after that, we must use an SMP-safe method to modify running code.
      Otherwise, if we are running the function tracer and update its
      function (by starting up the stack tracer, or perf tracing), the change
      of the function called by the ftrace trampoline is done directly. If
      this is being executed on another CPU, that CPU may take a GPF and
      crash the kernel.
      
      The breakpoint method is used to change the nops at all the functions, but
      the change of the ftrace callback handler itself was still using a
      direct modification. If tracing was enabled and the function callback
      was changed then another CPU could fault if it was currently calling
      the original callback. This modification must use the breakpoint method
      too.
      
      Note, the direct method is still used for boot up and module load.
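
      A condensed sketch of the converted update path (names as in
      arch/x86/kernel/ftrace.c of this period; error handling trimmed, not
      the literal patch):

      	int ftrace_update_ftrace_func(ftrace_func_t func)
      	{
      		unsigned long ip = (unsigned long)(&ftrace_call);
      		unsigned char old[MCOUNT_INSN_SIZE], *new;
      		int ret;

      		memcpy(old, &ftrace_call, MCOUNT_INSN_SIZE);
      		new = ftrace_call_replace(ip, (unsigned long)func);

      		/* tell the int3 handler that ftrace owns the breakpoint */
      		atomic_inc(&modifying_ftrace_code);

      		/* breakpoint method: int3, sync, patch, sync, restore, sync */
      		ret = ftrace_modify_code(ip, old, new);

      		atomic_dec(&modifying_ftrace_code);
      		return ret;
      	}
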
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      8a4d0a68
    • ftrace: Synchronize variable setting with breakpoints · a192cd04
      Steven Rostedt authored
      When the function tracer starts modifying the code via breakpoints
      it sets a variable (modifying_ftrace_code) to inform the breakpoint
      handler to call the ftrace int3 code.
      
      But there is no synchronization between setting this variable and the
      handler, so it is possible for the handler to be called on another CPU
      before it sees the variable set. This will cause a kernel crash, as the
      int3 handler will not know what to do with the breakpoint.

      I originally added smp_mb()'s to force the visibility of the variable,
      but H. Peter Anvin suggested that I just make it atomic.
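
      Roughly what the atomic version looks like (simplified from the final
      arch/x86 code):

      	atomic_t modifying_ftrace_code __read_mostly;

      	/* writer side, around the code patching */
      	atomic_inc(&modifying_ftrace_code);
      	/* ... install, update and remove breakpoints ... */
      	atomic_dec(&modifying_ftrace_code);

      	/* reader side, in do_int3(); unlikely() keeps kprobes cheap */
      	if (unlikely(atomic_read(&modifying_ftrace_code)) &&
      	    ftrace_int3_handler(regs))
      		return;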
      
      [ Added comments as suggested by Peter Zijlstra ]
      Suggested-by: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      a192cd04
  8. 17 May 2012, 1 commit
  9. 04 May 2012, 1 commit
  10. 28 April 2012, 2 commits
    • ftrace/x86: Remove the complex ftrace NMI handling code · 4a6d70c9
      Steven Rostedt authored
      As ftrace function tracing would require modifying code that could be
      executed in NMI context, which is not stopped by stop_machine(), ftrace
      had to use a complex algorithm with various stages of setup and memory
      barriers to make it work.
      
      With the new breakpoint method, this is no longer required. The changes
      to the code can be done without any problem in NMI context, as well as
      without stop machine altogether. Remove the complex code as it is
      no longer needed.
      
      Also, a lot of the notrace annotations could be removed from the NMI
      code, as it is now safe to trace it, with the exception of do_nmi
      itself, which does some special work to handle running on the debug
      stack. The breakpoint method can cause NMIs to double-nest on the debug
      stack if it is not set up properly, and that setup is done in do_nmi(),
      thus that function must not be traced.
      
      (Note: the sh arch may want to do the same.)
      
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      4a6d70c9
    • ftrace/x86: Have arch x86_64 use breakpoints instead of stop machine · 08d636b6
      Steven Rostedt authored
      This method changes x86 to add a breakpoint to the mcount locations
      instead of calling stop machine.
      
      Now that iret can be handled by NMIs, we perform the following to
      update code (sketched in C after the list):
      
      1) Add a breakpoint to all locations that will be modified
      
      2) Sync all cores
      
      3) Update all locations to be either a nop or call (except breakpoint
         op)
      
      4) Sync all cores
      
      5) Remove the breakpoint with the new code.
      
      6) Sync all cores
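
      A condensed sketch of that sequence (helper names approximate those in
      arch/x86/kernel/ftrace.c; for_each_record() is a hypothetical stand-in
      for the ftrace record iterator, and error handling is omitted):

      	static void run_sync(void)
      	{
      		/* force every core through a serializing event */
      		on_each_cpu(do_sync_core, NULL, 1);
      	}

      	for_each_record(rec)
      		add_breakpoint(rec);	/* 1) int3 at each mcount site  */
      	run_sync();			/* 2)                           */

      	for_each_record(rec)
      		add_update(rec);	/* 3) write new nop/call tail,
      					 *    int3 byte still in place  */
      	run_sync();			/* 4)                           */

      	for_each_record(rec)
      		finish_update(rec);	/* 5) first byte of the new code
      					 *    replaces the int3         */
      	run_sync();			/* 6)                           */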
      
      [
        Added updates that Masami suggested:
         Use unlikely(modifying_ftrace_code) in int3 trap to keep kprobes efficient.
         Don't use NOTIFY_* in ftrace handler in int3 as it is not a notifier.
      ]
      
      Cc: H. Peter Anvin <hpa@zytor.com>
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      08d636b6
  11. 26 May 2011, 1 commit
  12. 19 April 2011, 1 commit
  13. 10 March 2011, 1 commit
    • ftrace/graph: Trace function entry before updating index · 722b3c74
      Steven Rostedt authored
      Currently the index to the ret_stack is updated and the real return
      address is saved in the ret_stack. Then we call the trace function.
      The trace function could decide that it doesn't want to trace this
      function (e.g. set_graph_function does not match), and it will return 0,
      which means not to trace this call.
      
      The normal function graph tracer has this code:
      
      	if (!(trace->depth || ftrace_graph_addr(trace->func)) ||
      	      ftrace_graph_ignore_irqs())
      		return 0;
      
      What this states is: if the trace depth (which is curr_ret_stack)
      is zero (top of nested functions), then test whether we want to trace
      this function. If this function is not to be traced, then return 0, and
      the rest of the function graph tracer logic will not trace this function.
      
      The problem arises when an interrupt comes in after we have updated
      curr_ret_stack. The next function that gets called will have a
      trace->depth of 1, which fools this trace code into thinking that we
      are in a nested function and that we should trace. This causes
      interrupts to be traced when they should not be.
      
      The solution is to trace the function first and then update the ret_stack.
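
      Roughly, the reordered entry path becomes (simplified from
      prepare_ftrace_return(); old holds the saved return address):

      	trace.func = self_addr;
      	trace.depth = current->curr_ret_stack + 1;

      	/* ask the tracer first, while curr_ret_stack is untouched */
      	if (!ftrace_graph_entry(&trace)) {
      		*parent = old;		/* do not trace: restore and bail */
      		return;
      	}

      	/* only now save the return address and bump the index */
      	if (ftrace_push_return_trace(old, self_addr, &trace.depth,
      				     frame_pointer) == -EBUSY) {
      		*parent = old;
      		return;
      	}
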
      Reported-by: zhiping zhong <xzhong86@163.com>
      Reported-by: wu zhangjin <wuzhangjin@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      722b3c74
  14. 30 December 2010, 1 commit
  15. 18 November 2010, 1 commit
    • x86: Add RO/NX protection for loadable kernel modules · 84e1c6bb
      matthieu castet authored
      This patch is a logical extension of the protection provided by
      CONFIG_DEBUG_RODATA to LKMs. The protection is provided by
      splitting module_core and module_init into three logical parts
      each and setting appropriate page access permissions for each
      individual section:
      
       1. Code: RO+X
       2. RO data: RO+NX
       3. RW data: RW+NX
      
      In order to achieve proper protection, layout_sections() has
      been modified to align each of the three parts mentioned above
      on a page boundary. Next, the corresponding page access
      permissions are set right before successful exit from
      load_module(). Further, free_module() and sys_init_module() have
      been modified to set module_core and module_init as RW+NX right
      before calling module_free().
      
      By default, the original section layout and access flags are
      preserved. When compiled with CONFIG_DEBUG_SET_MODULE_RONX=y,
      the patch will page-align each group of sections to ensure that
      each page contains only one type of content and will enforce
      RO/NX for each group of pages.
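
      A sketch of the permission step (set_memory_ro()/set_memory_nx() are the
      real x86 interfaces; the section bases and page counts here are
      illustrative placeholders):

      	/* 1. Code: RO+X */
      	set_memory_ro(core_base, core_text_pages);

      	/* 2. RO data: RO+NX */
      	set_memory_ro(ro_base, ro_pages);
      	set_memory_nx(ro_base, ro_pages);

      	/* 3. RW data: RW+NX */
      	set_memory_nx(rw_base, rw_pages);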
      
        -v1: Initial proof-of-concept patch.
        -v2: The patch has been re-written to reduce the number of #ifdefs
             and to make it architecture-agnostic. Code formatting has also
             been corrected.
        -v3: Opportunistic RO/NX protection is now unconditional. Section
             page-alignment is enabled when CONFIG_DEBUG_RODATA=y.
        -v4: Removed most macros and improved coding style.
        -v5: Changed page-alignment and RO/NX section size calculation
        -v6: Fixed comments. Restricted RO/NX enforcement to x86 only
        -v7: Introduced CONFIG_DEBUG_SET_MODULE_RONX, added
             calls to set_all_modules_text_rw() and set_all_modules_text_ro()
             in ftrace
        -v8: updated for compatibility with linux 2.6.33-rc5
        -v9: coding style fixes
       -v10: more coding style fixes
       -v11: minor adjustments for -tip
       -v12: minor adjustments for v2.6.35-rc2-tip
       -v13: minor adjustments for v2.6.37-rc1-tip
      Signed-off-by: Siarhei Liakh <sliakh.lkml@gmail.com>
      Signed-off-by: Xuxian Jiang <jiang@cs.ncsu.edu>
      Acked-by: Arjan van de Ven <arjan@linux.intel.com>
      Reviewed-by: James Morris <jmorris@namei.org>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      Cc: Andi Kleen <ak@muc.de>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Kees Cook <kees.cook@canonical.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <4CE2F914.9070106@free.fr>
      [ minor cleanliness edits, -v14: build failure fix ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      84e1c6bb
  16. 21 September 2010, 1 commit
  17. 25 February 2010, 1 commit
    • ftrace: Remove memory barriers from NMI code when not needed · 0c54dd34
      Steven Rostedt authored
      The code in stop_machine that modifies the kernel text has a bit
      of logic to handle the case of NMIs. stop_machine does not prevent
      NMIs from executing, and if an NMI were to trigger on another CPU
      as the modifying CPU is changing the NMI text, a GPF could result.
      
      To prevent the GPF, the NMI calls ftrace_nmi_enter() which may
      modify the code first, then any other NMIs will just change the
      text to the same content which will do no harm. The code that
      stop_machine called must wait for NMIs to finish while it changes
      each location in the kernel. That code may also change the text
      to what the NMI changed it to. The key is that the text will never
      change content while another CPU is executing it.
      
      To make the above work, the call to ftrace_nmi_enter() must also do
      an smp_mb() as well as an atomic_inc(). But for applications like perf
      that require a high number of NMIs for profiling, this can have a
      dramatic effect on the system. Not only is it doing a full memory
      barrier on both nmi_enter() and nmi_exit(), it is also modifying a
      global variable with an atomic operation. This kills performance on
      large SMP machines.
      
      Since the memory barriers are only needed when ftrace is in the
      process of modifying the text (which is seldom), this patch
      adds a "modifying_code" variable that gets set before stop machine
      is executed and cleared afterwards.
      
      The NMIs will check this variable and store it in a per-CPU
      "save_modifying_code" variable that they will use to check whether they
      need to do the memory barriers and atomic dec on NMI exit.
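
      Sketched from the change in arch/x86/kernel/ftrace.c (simplified;
      __get_cpu_var() was the per-CPU accessor of this era):

      	static int modifying_code __read_mostly;
      	static DEFINE_PER_CPU(int, save_modifying_code);

      	void ftrace_nmi_enter(void)
      	{
      		__get_cpu_var(save_modifying_code) = modifying_code;

      		if (!__get_cpu_var(save_modifying_code))
      			return;	/* fast path: no barrier, no atomic op */

      		if (atomic_inc_return(&nmi_running) & MOD_CODE_WRITE_FLAG) {
      			smp_rmb();
      			ftrace_mod_code();
      		}
      		/* changes must be seen before any execution */
      		smp_mb();
      	}
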
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      0c54dd34
  18. 17 February 2010, 1 commit
  19. 03 November 2009, 1 commit
  20. 14 October 2009, 1 commit
    • tracing: Move syscalls metadata handling from arch to core · c44fc770
      Frederic Weisbecker authored
      Most of the syscalls metadata processing is done in arch code. But
      these operations are mostly generic across archs, especially now that
      we have a common variable name that expresses the number of syscalls
      supported by an arch: NR_syscalls. The only remaining bit that needs to
      reside in arch code is the syscall-number-to-address translation.
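
      The remaining per-arch hook is tiny; on x86 it is essentially:

      	unsigned long __init arch_syscall_addr(int nr)
      	{
      		return (unsigned long)(&sys_call_table)[nr];
      	}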
      
      v2: Compare syscall symbols only after the "sys" prefix, so that we
          avoid spurious mismatches with archs that have syscall wrappers,
          in which case syscall symbols have "SyS"-prefixed aliases.
          (Reported by: Heiko Carstens)
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      c44fc770
  21. 12 October 2009, 1 commit
  22. 27 August 2009, 1 commit
    • tracing: Convert event tracing code to use NR_syscalls · 57421dbb
      Jason Baron authored
      Convert the syscall event tracing code to use NR_syscalls, instead of
      FTRACE_SYSCALL_MAX. NR_syscalls is standard across most arches, and
      this reduces code confusion/complexity.
      Signed-off-by: Jason Baron <jbaron@redhat.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Cc: Jiaying Zhang <jiayingz@google.com>
      Cc: Martin Bligh <mbligh@google.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Josh Stone <jistone@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      LKML-Reference: <9b4f1a84ecae57cc6599412772efa36f0d2b815b.1251146513.git.jbaron@redhat.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      57421dbb
  23. 12 August 2009, 3 commits
    • tracing: Add individual syscalls tracepoint id support · 64c12e04
      Jason Baron authored
      The current state of syscall tracepoints generates only one event id
      for all syscall events.

      This patch associates an id with each syscall trace event, so that we
      can identify each syscall trace event using the 'perf' tool.
      Signed-off-by: Jason Baron <jbaron@redhat.com>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Cc: Jiaying Zhang <jiayingz@google.com>
      Cc: Martin Bligh <mbligh@google.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      64c12e04
    • tracing: Call arch_init_ftrace_syscalls at boot · 066e0378
      Jason Baron authored
      Call arch_init_ftrace_syscalls at boot, so we can determine early the
      set of syscalls for the syscall trace events.
      Signed-off-by: Jason Baron <jbaron@redhat.com>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Cc: Jiaying Zhang <jiayingz@google.com>
      Cc: Martin Bligh <mbligh@google.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      066e0378
    • tracing: Map syscall name to number · eeac19a7
      Jason Baron authored
      Add a new function to support translating a syscall name to its number
      at runtime.
      This allows the syscall event tracer to map syscall names to numbers.
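
      The translation is a linear scan over the syscall metadata table;
      essentially (sketch; the bound was later converted to NR_syscalls, see
      57421dbb above):

      	int syscall_name_to_nr(char *name)
      	{
      		int i;

      		for (i = 0; i < FTRACE_SYSCALL_MAX; i++) {
      			if (syscalls_metadata[i] &&
      			    !strcmp(name, syscalls_metadata[i]->name))
      				return i;
      		}
      		return -1;	/* no such syscall */
      	}
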
      Signed-off-by: Jason Baron <jbaron@redhat.com>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Cc: Jiaying Zhang <jiayingz@google.com>
      Cc: Martin Bligh <mbligh@google.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      eeac19a7
  24. 06 August 2009, 1 commit
  25. 19 June 2009, 1 commit
    • function-graph: add stack frame test · 71e308a2
      Steven Rostedt authored
      In case gcc does something funny with the stack frames, or the return
      from function code, we would like to detect that.
      
      An arch may implement passing of a variable that is unique to the
      function, saved on entering the function, and tested when exiting it.
      Usually the frame pointer can be used for this purpose.
      
      This patch also implements this for x86, where it passes in the stack
      frame of the parent function and tests that frame on exit.
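
      On exit, the test amounts to comparing the frame pointer saved at entry
      with the one seen at return; sketched from the pop path (simplified):

      	/* verify the frame pointer recorded on function entry */
      	if (unlikely(current->ret_stack[index].fp != frame_pointer)) {
      		ftrace_graph_stop();
      		WARN(1, "Bad frame pointer: expected %lx, received %lx\n",
      		     current->ret_stack[index].fp, frame_pointer);
      		*ret = (unsigned long)panic;	/* fail loudly */
      		return;
      	}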
      
      There was a case in x86_32 with optimize for size (-Os) where, for a
      few functions, gcc would align the stack frame and place a copy of the
      return address into it. The function graph tracer modified the copy and
      not the actual return address. On return from the function, it did not
      go to the tracer hook, but returned to the parent. This broke the
      function graph tracer, because the return of the parent (where gcc did
      not do this funky manipulation) returned to the location that the child
      function was supposed to return to. This caused strange kernel crashes.
      
      This test detected the problem and pointed out where the issue was.
      
      This modifies the parameters of one of the functions that the arch
      specific code calls, so it includes changes to arch code to accommodate
      the new prototype.
      
      Note, I notice that the parisc arch implements its own
      push_return_trace. This is now a generic function, and
      ftrace_push_return_trace should be used instead. This patch does not
      touch that code.
      
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Kyle McMartin <kyle@mcmartin.ca>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      71e308a2
  26. 14 May 2009, 1 commit
    • x86/function-graph: fix constraint for recording old return value · aa512a27
      Steven Rostedt authored
      After upgrading from gcc 4.2.2 to 4.4.0, the function graph tracer broke.
      Investigating, I found that in the asm that replaces the return value,
      gcc was using the same register for the old value as it was for the
      new value.
      
      	mov	(addr), old
      	mov	new, (addr)
      
      But if old and new are the same register, we clobber new with old!
      I first thought this was a bug in gcc 4.4.0 and reported it:
      
        http://gcc.gnu.org/bugzilla/show_bug.cgi?id=40132
      
      Andrew Pinski responded (quickly), saying that it was correct gcc behavior
      and the code needed to denote old as an "early clobber".
      
      Instead of "=r"(old), we need "=&r"(old).
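
      As a concrete illustration (a hypothetical minimal inline-asm fragment,
      not the tracer's actual asm block; addr holds the location of the
      return address):

      	asm volatile("mov (%[addr]), %[old]\n\t"
      		     "mov %[new], (%[addr])\n\t"
      		     : [old] "=&r" (old)  /* '&' = early clobber: gcc may
      					    * not give 'old' and 'new' the
      					    * same register */
      		     : [new] "r" (new), [addr] "r" (addr)
      		     : "memory");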
      
      [Impact: keep function graph tracer from breaking with gcc 4.4.0 ]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      aa512a27
  27. 09 April 2009, 1 commit
    • tracing/syscalls: use a dedicated file header · 47788c58
      Frederic Weisbecker authored
      Impact: fix build warnings and possible compat misbehavior on IA64
      
      Building a kernel on ia64 might trigger these ugly build warnings:
      
      CC      arch/ia64/ia32/sys_ia32.o
      In file included from arch/ia64/ia32/sys_ia32.c:55:
      arch/ia64/ia32/ia32priv.h:290:1: warning: "elf_check_arch" redefined
      In file included from include/linux/elf.h:7,
                       from include/linux/module.h:14,
                       from include/linux/ftrace.h:8,
                       from include/linux/syscalls.h:68,
                       from arch/ia64/ia32/sys_ia32.c:18:
      arch/ia64/include/asm/elf.h:19:1: warning: this is the location of the previous definition
      [...]
      
      sys_ia32.c includes linux/syscalls.h which in turn includes linux/ftrace.h
      to import the syscalls tracing prototypes.
      
      But including ftrace.h can pull in too many things for a low-level
      file, especially on ia64, where the ia32 private headers conflict with
      higher-level headers.

      Now we isolate the syscall tracing headers in their own lightweight file.
      Reported-by: Tony Luck <tony.luck@intel.com>
      Tested-by: Tony Luck <tony.luck@intel.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Tony Luck <tony.luck@intel.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: "Frank Ch. Eigler" <fche@redhat.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Jiaying Zhang <jiayingz@google.com>
      Cc: Michael Rubin <mrubin@google.com>
      Cc: Martin Bligh <mbligh@google.com>
      Cc: Michael Davidson <md@google.com>
      LKML-Reference: <20090408184058.GB6017@nowhere>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      47788c58
  28. 07 April 2009, 1 commit
  29. 24 March 2009, 1 commit
  30. 19 March 2009, 1 commit
    • ftrace: protect running nmi (V3) · e9d9df44
      Lai Jiangshan authored
      While reviewing the sensitive code ftrace_nmi_enter(), I found that
      the atomic variable nmi_running does protect NMI vs. do_ftrace_mod_code(),
      but it cannot protect NMI (an NMI already entered) vs. NMI (one just
      entering ftrace_nmi_enter()).
      
      cpu#1                   | cpu#2                 | cpu#3
      ftrace_nmi_enter()      | do_ftrace_mod_code()  |
        not modify            |                       |
      ------------------------|-----------------------|--
      executing               | set mod_code_write = 1|
      executing             --|-----------------------|--------------------
      executing               |                       | ftrace_nmi_enter()
      executing               |                       |    do modify
      ------------------------|-----------------------|-----------------
      ftrace_nmi_exit()       |                       |
      
      cpu#3 may be modifying the code that is still being executed on cpu#1;
      this will have undefined results and may take a GPF. This patch
      prevents that from occurring.
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      LKML-Reference: <49C0B411.30003@cn.fujitsu.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      e9d9df44
  31. 13 March 2009, 1 commit
  32. 05 March 2009, 1 commit