1. 19 Apr 2017, 1 commit
  2. 14 Apr 2017, 1 commit
    • ftrace/x86: Fix triple fault with graph tracing and suspend-to-ram · 34a477e5
      Committed by Josh Poimboeuf
      On x86-32, with CONFIG_FIRMWARE and multiple CPUs, if you enable function
      graph tracing and then suspend to RAM, it will triple fault and reboot when
      it resumes.
      
      The first fault happens when booting a secondary CPU:
      
      startup_32_smp()
        load_ucode_ap()
          prepare_ftrace_return()
            ftrace_graph_is_dead()
              (accesses 'kill_ftrace_graph')
      
      The early head_32.S code calls into load_ucode_ap(), which has an
      ftrace hook, so it calls prepare_ftrace_return(), which calls
      ftrace_graph_is_dead(), which tries to access the global
      'kill_ftrace_graph' variable with a virtual address, causing a fault
      because the CPU is still in real mode.
      
      The fix is to add a check in prepare_ftrace_return() to make sure it's
      running in protected mode before continuing.  The check makes sure the
      stack pointer is a virtual kernel address.  It's a bit of a hack, but
      it's not very intrusive and it works well enough.
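
      A minimal sketch of that check (the prototype shown follows x86's
      prepare_ftrace_return() of this era; the exact form of the upstream test is
      an assumption, the idea being that in the common configuration kernel
      virtual addresses have the sign bit set, so a signed compare of the frame
      address is enough):

        #include <linux/compiler.h>

        /* Illustrative sketch, not necessarily the literal upstream fix. */
        void prepare_ftrace_return(unsigned long self_addr, unsigned long *parent,
                                   unsigned long frame_pointer)
        {
                /*
                 * If we were called from early (real mode) CPU bringup, the stack
                 * pointer is not a kernel virtual address yet.  Bail out before
                 * touching any global variables.
                 */
                if (unlikely((long)__builtin_frame_address(0) >= 0))
                        return;

                /* ... normal function graph entry handling follows ... */
        }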
      
      For reference, here are a few other (more difficult) ways this could
      have potentially been fixed:
      
      - Move startup_32_smp()'s call to load_ucode_ap() down to *after* paging
        is enabled.  (No idea what that would break.)
      
      - Track down load_ucode_ap()'s entire callee tree and mark all the
        functions 'notrace'.  (Probably not realistic.)
      
      - Pause graph tracing in ftrace_suspend_notifier_call() or bringup_cpu()
        or __cpu_up(), and ensure that the pause facility can be queried from
        real mode.
      Reported-by: Paul Menzel <pmenzel@molgen.mpg.de>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Tested-by: Paul Menzel <pmenzel@molgen.mpg.de>
      Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Cc: "Rafael J . Wysocki" <rjw@rjwysocki.net>
      Cc: linux-acpi@vger.kernel.org
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: stable@kernel.org
      Cc: Len Brown <lenb@kernel.org>
      Link: http://lkml.kernel.org/r/5c1272269a580660703ed2eccf44308e790c7a98.1492123841.git.jpoimboe@redhat.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      34a477e5
  3. 30 Mar 2017, 1 commit
    • x86/build: Mostly disable '-maccumulate-outgoing-args' · 3f135e57
      Committed by Josh Poimboeuf
      The GCC '-maccumulate-outgoing-args' flag is enabled for most configs,
      mostly because of issues which are no longer relevant.  For most
      configs, and with most recent versions of GCC, it's no longer needed.
      
      Clarify which cases need it, and only enable it for those cases.  Also
      produce a compile-time error for the ftrace graph + mcount + '-Os' case,
      which will otherwise cause runtime failures.
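
      As an illustration of such a build-time guard (a hedged sketch, not the
      literal upstream change; CC_USING_FENTRY is the macro the build defines
      when the compiler uses fentry instead of mcount):

        /* Sketch: refuse to build the known-broken combination. */
        #if defined(CONFIG_FUNCTION_GRAPH_TRACER) && !defined(CC_USING_FENTRY) && \
            defined(CONFIG_CC_OPTIMIZE_FOR_SIZE)
        # error "Graph tracing with mcount and -Os needs -maccumulate-outgoing-args"
        #endif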
      
      The main benefit of '-maccumulate-outgoing-args' is that it prevents an
      ugly prologue for functions which have aligned stacks.  But removing the
      option also has some benefits: more readable argument saves, smaller
      text size, and (presumably) slightly improved performance.
      
      Here are the object size savings for 32-bit and 64-bit defconfig
      kernels:
      
            text	   data	    bss	     dec	    hex	filename
        10006710	3543328	1773568	15323606	 e9d1d6	vmlinux.x86-32.before
         9706358	3547424	1773568	15027350	 e54c96	vmlinux.x86-32.after
      
            text	   data	    bss	     dec	    hex	filename
        10652105	4537576	 843776	16033457	 f4a6b1	vmlinux.x86-64.before
        10639629	4537576	 843776	16020981	 f475f5	vmlinux.x86-64.after
      
      That comes out to a 3% text size improvement on x86-32 and a 0.1% text
      size improvement on x86-64.
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andrew Lutomirski <luto@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20170316193133.zrj6gug53766m6nn@treble
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      3f135e57
  4. 29 Mar 2017, 1 commit
    • ftrace/x86: Do no run CPU sync when there is only one CPU online · 2b87965a
      Committed by Steven Rostedt (VMware)
      Moving enabling of function tracing to early boot, even before scheduling is
      enabled, means that it is not safe to enable interrupts. When function
      tracing was enabled at boot up, it used to happen after scheduling and the
      other CPUs were brought up. That required running a sync across all CPUs
      when modifying the function hook locations in the code. To do the
      synchronization, interrupts had to be enabled. Now function tracing can be
      started before the other CPUs are brought up, and enabling interrupts in
      that case is dangerous. As only the boot CPU is active, there is no reason
      to run the synchronization. If the online CPU count is one, do not bother
      doing the synchronization. This removes the need to enable interrupts.
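
      A minimal sketch of the change, assuming it lives in x86's run_sync() helper
      (the helper that IPIs the other CPUs):

        #include <linux/cpumask.h>

        static void run_sync(void)
        {
                /*
                 * With only the boot CPU online there is nothing to synchronize,
                 * so skip the cross-CPU sync and the interrupt enabling it needs.
                 */
                if (num_online_cpus() == 1)
                        return;

                /* ... existing logic: briefly enable IRQs and sync the other CPUs ... */
        }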
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      2b87965a
  5. 10 Mar 2017, 1 commit
  6. 24 Aug 2016, 2 commits
    • ftrace/x86: Implement HAVE_FUNCTION_GRAPH_RET_ADDR_PTR · 471bd10f
      Committed by Josh Poimboeuf
      Use the more reliable version of ftrace_graph_ret_addr() so we no longer
      have to worry about the unwinder getting out of sync with the function
      graph ret_stack index, which can happen if the unwinder skips any frames
      before calling ftrace_graph_ret_addr().
      
      This fixes this issue (and several others like it):
      
        $ cat /proc/self/stack
        [<ffffffff810489a2>] save_stack_trace_tsk+0x22/0x40
        [<ffffffff81311a89>] proc_pid_stack+0xb9/0x110
        [<ffffffff813127c4>] proc_single_show+0x54/0x80
        [<ffffffff812be088>] seq_read+0x108/0x3e0
        [<ffffffff812923d7>] __vfs_read+0x37/0x140
        [<ffffffff812929d9>] vfs_read+0x99/0x140
        [<ffffffff81293f28>] SyS_read+0x58/0xc0
        [<ffffffff818af97c>] entry_SYSCALL_64_fastpath+0x1f/0xbd
        [<ffffffffffffffff>] 0xffffffffffffffff
      
        $ echo function_graph > /sys/kernel/debug/tracing/current_tracer
      
        $ cat /proc/self/stack
        [<ffffffff818b2428>] return_to_handler+0x0/0x27
        [<ffffffff810394cc>] print_context_stack+0xfc/0x100
        [<ffffffff818b2428>] return_to_handler+0x0/0x27
        [<ffffffff8103891b>] dump_trace+0x12b/0x350
        [<ffffffff818b2428>] return_to_handler+0x0/0x27
        [<ffffffff810489a2>] save_stack_trace_tsk+0x22/0x40
        [<ffffffff818b2428>] return_to_handler+0x0/0x27
        [<ffffffff81311a89>] proc_pid_stack+0xb9/0x110
        [<ffffffff818b2428>] return_to_handler+0x0/0x27
        [<ffffffff813127c4>] proc_single_show+0x54/0x80
        [<ffffffff818b2428>] return_to_handler+0x0/0x27
        [<ffffffff812be088>] seq_read+0x108/0x3e0
        [<ffffffff818b2428>] return_to_handler+0x0/0x27
        [<ffffffff812923d7>] __vfs_read+0x37/0x140
        [<ffffffff818b2428>] return_to_handler+0x0/0x27
        [<ffffffff812929d9>] vfs_read+0x99/0x140
        [<ffffffffffffffff>] 0xffffffffffffffff
      
      Enabling function graph tracing causes the stack trace to change in two
      ways:
      
      First, the real call addresses are confusingly interspersed with
      'return_to_handler' addresses.  This issue will be fixed by the next
      patch.
      
      Second, the stack trace is offset by two frames, because the unwinder
      skipped the first two frames and got out of sync with the ret_stack
      index.  This patch fixes this issue.
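
      For illustration, this is roughly how an unwinder consumes the new interface
      (the ftrace_graph_ret_addr() prototype is the one this series adds; the
      wrapper around it is only a sketch):

        #include <linux/ftrace.h>

        /*
         * Translate a possible return_to_handler address back to the real caller
         * by passing the location of the return address (retp), instead of
         * trusting a ret_stack index to stay in sync with the unwinder.
         */
        static unsigned long real_return_address(struct task_struct *task,
                                                 unsigned long addr,
                                                 unsigned long *retp)
        {
                int graph_idx = 0;

                return ftrace_graph_ret_addr(task, &graph_idx, addr, retp);
        }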
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nilay Vaish <nilayvaish@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/a6d623e36f8d08f9a17bd74d804d201177a23afd.1471607358.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      471bd10f
    • ftrace: Add return address pointer to ftrace_ret_stack · 9a7c348b
      Committed by Josh Poimboeuf
      Storing this value will help prevent unwinders from getting out of sync
      with the function graph tracer ret_stack.  Now instead of needing a
      stateful iterator, they can compare the return address pointer to find
      the right ret_stack entry.
      
      Note that an array of 50 ftrace_ret_stack structs is allocated for every
      task.  So when an arch implements this, it will add either 200 or 400
      bytes of memory usage per task (depending on whether it's a 32-bit or
      64-bit platform).
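
      Roughly, each per-task ret_stack entry gains a pointer when the arch selects
      HAVE_FUNCTION_GRAPH_RET_ADDR_PTR (fields abbreviated; treat this as a sketch
      of the layout, not the exact structure):

        struct ftrace_ret_stack {
                unsigned long ret;      /* original return address */
                unsigned long func;     /* traced function */
                /* ... timing and frame-pointer-test fields elided ... */
        #ifdef HAVE_FUNCTION_GRAPH_RET_ADDR_PTR
                unsigned long *retp;    /* where 'ret' was stored on the stack */
        #endif
        };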
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nilay Vaish <nilayvaish@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/a95cfcc39e8f26b89a430c56926af0bb217bc0a1.1471607358.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      9a7c348b
  7. 19 Mar 2016, 1 commit
  8. 22 Feb 2016, 1 commit
  9. 17 Feb 2016, 1 commit
    • x86/ftrace, x86/asm: Kill ftrace_caller_end label · f1b92bb6
      Committed by Borislav Petkov
      One of ftrace_caller_end and ftrace_return is redundant so unify them.
      Rename ftrace_return to ftrace_epilogue to mean that everything after
      that label represents, like an afterword, work which happens *after* the
      ftrace call, e.g., the function graph tracer for one.
      
      Steve wants this to rather mean "[a]n event which reflects meaningfully
      on a recently ended conflict or struggle." I can imagine that ftrace can
      be a struggle sometimes.
      
      Anyway, beef up the comment about the code contents and layout before
      ftrace_epilogue label.
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1455612202-14414-4-git-send-email-bp@alien8.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      f1b92bb6
  10. 05 Jan 2016, 1 commit
  11. 26 Nov 2015, 1 commit
    • ftrace: Add variable ftrace_expected for archs to show expected code · b05086c7
      Committed by Steven Rostedt (Red Hat)
      When an anomaly is found while modifying function code, ftrace_bug() is
      called which disables the function tracing infrastructure and reports
      information about what failed. If the code that is to be replaced does not
      match what is expected, then actual code is shown. Currently there is no
      arch generic way to show what was expected.
      
      Add a new variable pointer called ftrace_expected that the arch code can set
      to point to what it expected so that ftrace_bug() can report the actual text
      as well as the text that was expected to be there.
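
      A sketch of how arch code can use it around a code comparison (the exact
      type of the ftrace_expected declaration and the helper shown are assumptions
      for illustration):

        #include <linux/errno.h>
        #include <linux/string.h>
        #include <linux/uaccess.h>
        #include <asm/ftrace.h>

        extern void *ftrace_expected;   /* assumed declaration from the ftrace core */

        static int verify_mcount_site(void *ip, void *old_code)
        {
                unsigned char cur_code[MCOUNT_INSN_SIZE];

                /* Let ftrace_bug() report what was expected next to what was found. */
                ftrace_expected = old_code;

                if (probe_kernel_read(cur_code, ip, MCOUNT_INSN_SIZE))
                        return -EFAULT;

                return memcmp(cur_code, old_code, MCOUNT_INSN_SIZE) ? -EINVAL : 0;
        }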
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      b05086c7
  12. 23 Oct 2015, 1 commit
  13. 20 Jan 2015, 1 commit
    • module: remove mod arg from module_free, rename module_memfree(). · be1f221c
      Committed by Rusty Russell
      Nothing needs the module pointer any more, and the next patch will
      call it from RCU, where the module itself might no longer exist.
      Removing the arg is the safest approach.
      
      This just codifies the use of the module_alloc/module_free pattern
      which ftrace and bpf use.
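
      In effect, the interface change is just this (sketch of the before/after
      prototypes):

        /* Before: the module pointer was passed but no longer needed. */
        void module_free(struct module *mod, void *module_region);

        /* After: callers such as ftrace and bpf only hand back the region. */
        void module_memfree(void *module_region);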
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: Jesper Nilsson <jesper.nilsson@axis.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: x86@kernel.org
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: linux-cris-kernel@axis.com
      Cc: linux-kernel@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Cc: nios2-dev@lists.rocketboards.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: sparclinux@vger.kernel.org
      Cc: netdev@vger.kernel.org
      be1f221c
  14. 02 Dec 2014, 1 commit
  15. 20 Nov 2014, 1 commit
    • ftrace/x86/extable: Add is_ftrace_trampoline() function · aec0be2d
      Committed by Steven Rostedt (Red Hat)
      Stack traces that happen from function tracing check if the address
      on the stack is a __kernel_text_address(). That is, is the address
      kernel code. This calls core_kernel_text() which returns true
      if the address is part of the builtin kernel code. It also calls
      is_module_text_address() which returns true if the address belongs
      to module code.
      
      But what is missing is ftrace dynamically allocated trampolines.
      These trampolines are allocated for individual ftrace_ops that
      call the ftrace_ops callback functions directly. But if they do a
      stack trace, the code checking the stack won't detect them as they
      are neither core kernel code nor module address space.
      
      Adding another field to ftrace_ops that also stores the size of
      the trampoline assigned to it, we can create a new function called
      is_ftrace_trampoline() that returns true if the address is a
      dynamically allocated ftrace trampoline. Note, it ignores trampolines
      that are not dynamically allocated as they will return true with
      the core_kernel_text() function.
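
      The heart of the check is a range test against the new size field (a sketch;
      the real implementation also walks the registered ftrace_ops under RCU):

        #include <linux/ftrace.h>

        /*
         * An ops' dynamically allocated trampoline covers the address range
         * [trampoline, trampoline + trampoline_size).
         */
        static bool ops_has_trampoline_addr(struct ftrace_ops *ops, unsigned long addr)
        {
                if (!ops->trampoline || !ops->trampoline_size)
                        return false;

                return addr >= ops->trampoline &&
                       addr <  ops->trampoline + ops->trampoline_size;
        }

      __kernel_text_address() can then also accept addresses for which
      is_ftrace_trampoline() returns true.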
      
      Link: http://lkml.kernel.org/r/20141119034829.497125839@goodmis.org
      
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      aec0be2d
  16. 12 Nov 2014, 2 commits
    • ftrace: Add more information to ftrace_bug() output · 4fd3279b
      Committed by Steven Rostedt (Red Hat)
      With the introduction of the dynamic trampolines, it is useful that if
      things go wrong that ftrace_bug() produces more information about what
      the current state is. This can help debug issues that may arise.
      
      Ftrace has lots of checks to make sure that the state of the system it
      touches is exactly what it expects it to be. When it detects an abnormality
      it calls ftrace_bug() and disables itself to prevent any further damage.
      It is crucial that ftrace_bug() produces sufficient information that
      can be used to debug the situation.
      
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: Borislav Petkov <bp@suse.de>
      Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Tested-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      4fd3279b
    • ftrace/x86: Allow !CONFIG_PREEMPT dynamic ops to use allocated trampolines · 12cce594
      Committed by Steven Rostedt (Red Hat)
      When the static ftrace_ops (like function tracer) enables tracing, and it
      is the only callback that is referencing a function, a trampoline is
      dynamically allocated to the function that calls the callback directly
      instead of calling a loop function that iterates over all the registered
      ftrace ops (if more than one ops is registered).
      
      But when it comes to dynamically allocated ftrace_ops, where they may be
      freed, on a CONFIG_PREEMPT kernel there's no way to know when it is safe
      to free the trampoline. If a task was preempted while executing on the
      trampoline, there's currently no way to know when it will be off that
      trampoline.
      
      But this is not true when it comes to !CONFIG_PREEMPT. The current method
      of calling schedule_on_each_cpu() will force tasks off the trampoline,
      because they cannot schedule while on it (kernel preemption is not
      configured). That means it is safe to free a dynamically allocated
      ftrace ops trampoline when CONFIG_PREEMPT is not configured.
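
      A sketch of that reasoning in code (ftrace_sync() is ftrace's empty work
      callback used with schedule_on_each_cpu(); the wrapper name here is
      illustrative):

        static void maybe_free_trampoline(struct ftrace_ops *ops)
        {
                /*
                 * With CONFIG_PREEMPT a task may have been preempted while on the
                 * trampoline and there is no way to know when it leaves it, so
                 * the trampoline must not be freed.
                 */
                if (IS_ENABLED(CONFIG_PREEMPT))
                        return;

                /*
                 * Without preemption, forcing every CPU through a scheduling point
                 * guarantees nothing is still executing on the trampoline.
                 */
                schedule_on_each_cpu(ftrace_sync);

                arch_ftrace_trampoline_free(ops);
        }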
      
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Acked-by: Borislav Petkov <bp@suse.de>
      Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Tested-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      12cce594
  17. 01 Nov 2014, 2 commits
    • ftrace/x86: Show trampoline call function in enabled_functions · 15d5b02c
      Committed by Steven Rostedt (Red Hat)
      The file /sys/kernel/debug/tracing/enabled_functions is used to debug
      ftrace function hooks. Add to the output what function is being called
      by the trampoline if the arch supports it.
      
      Add support for this feature in x86_64.
      
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Tested-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      15d5b02c
    • ftrace/x86: Add dynamic allocated trampoline for ftrace_ops · f3bea491
      Committed by Steven Rostedt (Red Hat)
      The current method of handling multiple function callbacks is to register
      a list function callback that calls all the other callbacks based on
      their hash tables and compare it to the function that the callback was
      called on. But this is very inefficient.
      
      For example, if you are tracing all functions in the kernel and then
      add a kprobe to a function such that the kprobe uses ftrace, the
      mcount trampoline will switch from calling the function trace callback
      to calling the list callback that will iterate over all registered
      ftrace_ops (in this case, the function tracer and the kprobes callback).
      That means for every function being traced it checks the hash of the
      ftrace_ops for function tracing and kprobes, even though the kprobe
      is only set on a single function. The kprobes ftrace_ops is checked
      for every function being traced!
      
      Instead of calling the list function for functions that are only being
      traced by a single callback, we can call a dynamically allocated
      trampoline that calls the callback directly. The function graph tracer
      already uses a direct call trampoline when it is being traced by itself
      but it is not dynamically allocated. Its trampoline is static in the
      kernel core. The infrastructure that called the function graph trampoline
      can also be used to call a dynamically allocated one.
      
      For now, only ftrace_ops that are not dynamically allocated can have
      a trampoline. That is, users such as function tracer or stack tracer.
      kprobes and perf allocate their ftrace_ops, and until there's a safe
      way to free the trampoline, it can not be used. The dynamically allocated
      ftrace_ops may, although, use the trampoline if the kernel is not
      compiled with CONFIG_PREEMPT. But that will come later.
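
      Conceptually, the difference is between the shared list loop below and a
      per-ops trampoline that jumps straight to one callback ('ops_list' and
      'ops_wants_ip()' are illustrative stand-ins for the real list walker and
      hash test):

        #include <linux/ftrace.h>

        extern struct ftrace_ops *ops_list;                                 /* illustrative */
        extern bool ops_wants_ip(struct ftrace_ops *op, unsigned long ip);  /* illustrative */

        /* The shared slow path: every traced location funnels through this. */
        static void list_dispatch(unsigned long ip, unsigned long parent_ip,
                                  struct ftrace_ops *ignored, struct pt_regs *regs)
        {
                struct ftrace_ops *op;

                for (op = ops_list; op; op = op->next)
                        if (ops_wants_ip(op, ip))       /* hash lookup, every time */
                                op->func(ip, parent_ip, op, regs);
        }

        /*
         * A dynamically allocated trampoline removes all of this: the call site
         * is patched to jump to a trampoline that calls the single interested
         * callback directly, with no list walk and no hash checks.
         */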
      Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Tested-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f3bea491
  18. 17 Jul 2014, 1 commit
  19. 04 Jun 2014, 1 commit
  20. 14 May 2014, 3 commits
    • ftrace: Remove FTRACE_UPDATE_MODIFY_CALL_REGS flag · f1b2f2bd
      Committed by Steven Rostedt (Red Hat)
      As the decision about what needs to be done (converting a call to
      ftrace_caller into ftrace_caller_regs, or converting from ftrace_caller_regs
      back to ftrace_caller) can easily be determined from the rec->flags bits
      FTRACE_FL_REGS and FTRACE_FL_REGS_EN, there's no need to have
      ftrace_check_record() return either UPDATE_MODIFY_CALL_REGS or
      UPDATE_MODIFY_CALL. Just the latter is enough. This added flag causes
      more complexity than is required. Remove it.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f1b2f2bd
    • ftrace: Make get_ftrace_addr() and get_ftrace_addr_old() global · 7413af1f
      Committed by Steven Rostedt (Red Hat)
      Move and rename get_ftrace_addr() and get_ftrace_addr_old() to
      ftrace_get_addr_new() and ftrace_get_addr_curr() respectively.
      
      This moves these two helper functions in the generic code out from
      the arch specific code, and renames them to have a better generic
      name. This will allow other archs to use them as well as makes it
      a bit easier to work on getting separate trampolines for different
      functions.
      
      ftrace_get_addr_new() returns the trampoline address that the mcount
      call address will be converted to.
      
      ftrace_get_addr_curr() returns the trampoline address of what the
      mcount call address currently jumps to.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      7413af1f
    • ftrace/x86: Get the current mcount addr for add_breakpoint() · 94792ea0
      Committed by Steven Rostedt (Red Hat)
      The add_breakpoint() code in the ftrace update path gets the address
      of what the call will become, but if the mcount address is changing
      from regs to non-regs ftrace_caller or vice versa, it will use what
      the record currently is.
      
      This is rather silly as the code should always use what is currently
      there regardless of if it's changing the regs function or just converting
      to a nop.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      94792ea0
  21. 22 Apr 2014, 1 commit
  22. 07 Mar 2014, 4 commits
  23. 04 Mar 2014, 2 commits
    • ftrace/x86: One more missing sync after fixup of function modification failure · 12729f14
      Committed by Petr Mladek
      If a failure occurs while modifying ftrace function, it bails out and will
      remove the tracepoints to be back to what the code originally was.
      
      There is missing the final sync run across the CPUs after the fix up is done
      and before the ftrace int3 handler flag is reset.
      
      Here's the description of the problem:
      
      	CPU0				CPU1
      	----				----
        remove_breakpoint();
        modifying_ftrace_code = 0;
      
      				[still sees breakpoint]
      				<takes trap>
      				[sees modifying_ftrace_code as zero]
      				[no breakpoint handler]
      				[goto failed case]
      				[trap exception - kernel breakpoint, no
      				 handler]
      				BUG()
      
      Link: http://lkml.kernel.org/r/1393258342-29978-2-git-send-email-pmladek@suse.cz
      
      Fixes: 8a4d0a68 "ftrace: Use breakpoint method to update ftrace caller"
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: H. Peter Anvin <hpa@linux.intel.com>
      Signed-off-by: Petr Mladek <pmladek@suse.cz>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      12729f14
    • ftrace/x86: Run a sync after fixup on failure · c932c6b7
      Committed by Steven Rostedt (Red Hat)
      If a failure occurs while enabling a trace, it bails out and will remove
      the tracepoints to be back to what the code originally was. But the fix
      up had some bugs in it. By injecting a failure in the code, the fix up
      ran to completion, but shortly afterward the system rebooted.
      
      There were two bugs here.
      
      The first was that there was no final sync run across the CPUs after the
      fix up was done, and before the ftrace int3 handler flag was reset. That
      means that other CPUs could still see the breakpoint and trigger on it
      long after the flag was cleared, and the int3 handler would think it was
      a spurious interrupt. Worse yet, the int3 handler could hit other breakpoints
      because the ftrace int3 handler flag would have prevented the int3 handler
      from going further.
      
      Here's a description of the issue:
      
      	CPU0				CPU1
      	----				----
        remove_breakpoint();
        modifying_ftrace_code = 0;
      
      				[still sees breakpoint]
      				<takes trap>
      				[sees modifying_ftrace_code as zero]
      				[no breakpoint handler]
      				[goto failed case]
      				[trap exception - kernel breakpoint, no
      				 handler]
      				BUG()
      
      The second bug was that the removal of the breakpoints required the
      "within()" logic instead of accessing the ip address directly. As the
      kernel text is mapped read-only when CONFIG_DEBUG_RODATA is set, the
      removal of the breakpoint is a modification of that read-only kernel text.
      ftrace_write() includes the "within()" logic, whereas
      probe_kernel_write() does not. This prevented the breakpoint from being
      removed at all.
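
      A sketch of the distinction (the body and signature shown for the removal
      helper are illustrative, not the actual patch):

        /*
         * probe_kernel_write() goes through the normal, read-only mapping of the
         * kernel text and fails when CONFIG_DEBUG_RODATA is set.  ftrace_write()
         * contains the "within()" logic that redirects the write to a writable
         * alias of the text, so breakpoint removal must use it as well.
         */
        static int remove_breakpoint(struct dyn_ftrace *rec, const char *old_code)
        {
                return ftrace_write(rec->ip, old_code, 1);
        }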
      
      Link: http://lkml.kernel.org/r/1392650573-3390-1-git-send-email-pmladek@suse.cz
      Reported-by: Petr Mladek <pmladek@suse.cz>
      Tested-by: Petr Mladek <pmladek@suse.cz>
      Acked-by: H. Peter Anvin <hpa@linux.intel.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      c932c6b7
  24. 12 Feb 2014, 1 commit
    • ftrace/x86: Use breakpoints for converting function graph caller · 87fbb2ac
      Committed by Steven Rostedt (Red Hat)
      When the conversion was made to remove stop machine and use the breakpoint
      logic instead, the modification of the function graph caller was still
      done directly as though it were being done under stop machine.
      
      As it is not converted via stop machine anymore, there is a possibility
      that the code could be laid across cache lines and if another CPU is
      accessing that function graph call when it is being updated, it could
      cause a General Protection Fault.
      
      Convert the update of the function graph caller to use the breakpoint
      method as well.
      
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: stable@vger.kernel.org # 3.5+
      Fixes: 08d636b6 "ftrace/x86: Have arch x86_64 use breakpoints instead of stop machine"
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      87fbb2ac
  25. 06 Nov 2013, 1 commit
  26. 17 Nov 2012, 1 commit
  27. 20 Jul 2012, 2 commits
    • ftrace/x86: Add save_regs for i386 function calls · 4de72395
      Committed by Steven Rostedt
      Add saving full regs for function tracing on i386.
      The saving of regs was influenced by patches sent out by
      Masami Hiramatsu.
      
      Link: http://lkml.kernel.org/r/20120711195745.379060003@goodmis.org
      Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      4de72395
    • ftrace/x86: Add separate function to save regs · 08f6fba5
      Committed by Steven Rostedt
      Add a way to have different functions calling different trampolines.
      If a ftrace_ops wants regs saved on the return, then have only the
      functions with ops registered to save regs. Functions registered by
      other ops would not be affected, unless the functions overlap.
      
      If one ftrace_ops registered functions A, B and C and another ops
      registered functions to save regs on A and D, then only functions
      A and D would be saving regs. Function B and C would work as normal.
      Although A is registered by both ops: normal and saves regs; this is fine
      as saving the regs is needed to satisfy one of the ops that calls it
      but the regs are ignored by the other ops function.
      
      x86_64 implements the full regs saving, and i386 just passes a NULL
      for regs to satisfy the ftrace_ops calling convention, where an arch must
      supply both regs and ftrace_ops parameters even if regs is just NULL.
      
      It is OK for an arch to pass NULL regs. All function trace users that
      require regs passing must add the flag FTRACE_OPS_FL_SAVE_REGS when
      registering the ftrace_ops. If the arch does not support saving regs
      then the ftrace_ops will fail to register. The flag
      FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED may be set that will prevent the
      ftrace_ops from failing to register. In this case, the handler may
      either check if regs is not NULL or check if ARCH_SUPPORTS_FTRACE_SAVE_REGS.
      If the arch supports passing regs it will set this macro and pass regs
      for ops that request them. All other archs will just pass NULL.
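
      A sketch of what a regs-requesting user looks like with this interface (the
      callback body and the names my_callback/my_ops are illustrative):

        #include <linux/ftrace.h>
        #include <linux/printk.h>

        static void my_callback(unsigned long ip, unsigned long parent_ip,
                                struct ftrace_ops *op, struct pt_regs *regs)
        {
                /* regs may be NULL if only ..._IF_SUPPORTED was requested and the
                 * arch cannot save registers. */
                if (regs)
                        pr_info("traced %pS with registers saved\n", (void *)ip);
        }

        static struct ftrace_ops my_ops = {
                .func  = my_callback,
                .flags = FTRACE_OPS_FL_SAVE_REGS,  /* registration fails if unsupported */
        };

      register_ftrace_function(&my_ops) then either succeeds with registers
      guaranteed to be passed, or fails on an arch without support.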
      
      Link: http://lkml.kernel.org/r/20120711195745.107705970@goodmis.org
      
      Cc: Alexander van Heukelum <heukelum@fastmail.fm>
      Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      08f6fba5
  28. 01 Jun 2012, 2 commits
    • ftrace: Use breakpoint method to update ftrace caller · 8a4d0a68
      Committed by Steven Rostedt
      On boot up and module load, it is fine to modify the code directly,
      without the use of breakpoints. This is because boot up modification
      is done before SMP is initialized, thus the modification is serial,
      and module load is done before the module executes.
      
      But after that we must use a SMP safe method to modify running code.
      Otherwise, if we are running the function tracer and update its
      function (by starting off the stack tracer, or perf tracing)
      the change of the function called by the ftrace trampoline is done
      directly. If this is being executed on another CPU, that CPU may
      take a GPF and crash the kernel.
      
      The breakpoint method is used to change the nops at all the functions, but
      the change of the ftrace callback handler itself was still using a
      direct modification. If tracing was enabled and the function callback
      was changed then another CPU could fault if it was currently calling
      the original callback. This modification must use the breakpoint method
      too.
      
      Note, the direct method is still used for boot up and module load.
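
      In outline, the breakpoint method itself looks like this (heavily
      simplified; text_poke_step() and sync_all_cpus() are illustrative stand-ins
      for the kernel's actual text-poking and CPU-sync helpers):

        /* Patch a live 5-byte call site without stop_machine(). */
        extern int text_poke_step(unsigned long ip, const void *buf, int len); /* illustrative */
        extern void sync_all_cpus(void);                                       /* illustrative */

        static int update_call_site(unsigned long ip, const unsigned char *new_insn)
        {
                const unsigned char int3 = 0xcc;
                int ret;

                /* 1) Put an int3 on the first byte so concurrent executors trap
                 *    into the ftrace int3 handler instead of running a torn insn. */
                ret = text_poke_step(ip, &int3, 1);
                if (ret)
                        return ret;
                sync_all_cpus();

                /* 2) Now the tail can be rewritten safely behind the breakpoint. */
                ret = text_poke_step(ip + 1, new_insn + 1, 4);
                if (ret)
                        return ret;
                sync_all_cpus();

                /* 3) Finally replace the int3 with the new first byte. */
                ret = text_poke_step(ip, new_insn, 1);
                sync_all_cpus();
                return ret;
        }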
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      8a4d0a68
    • ftrace: Synchronize variable setting with breakpoints · a192cd04
      Committed by Steven Rostedt
      When the function tracer starts modifying the code via breakpoints
      it sets a variable (modifying_ftrace_code) to inform the breakpoint
      handler to call the ftrace int3 code.
      
      But there's no synchronization between setting this code and the
      handler, thus it is possible for the handler to be called on another
      CPU before it sees the variable. This will cause a kernel crash as
      the int3 handler will not know what to do with it.
      
      I originally added smp_mb()'s to force the visibility of the variable
      but H. Peter Anvin suggested that I just make it atomic.
      
      [ Added comments as suggested by Peter Zijlstra ]
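
      A sketch of the resulting pattern (modifying_ftrace_code and
      ftrace_int3_handler() are the real names; the two helpers wrapping them are
      illustrative):

        #include <linux/atomic.h>
        #include <linux/cache.h>
        #include <asm/ptrace.h>

        extern int ftrace_int3_handler(struct pt_regs *regs);

        atomic_t modifying_ftrace_code __read_mostly;

        /* Writer side: a locked RMW on x86 also provides the ordering that the
         * earlier smp_mb() approach was meant to give. */
        static void start_modifying(void)
        {
                atomic_inc(&modifying_ftrace_code);
        }

        /* int3 handler side (simplified): only divert to the ftrace handler while
         * a modification is known to be in progress. */
        static bool maybe_ftrace_int3(struct pt_regs *regs)
        {
                return atomic_read(&modifying_ftrace_code) &&
                       ftrace_int3_handler(regs);
        }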
      Suggested-by: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      a192cd04
  29. 17 May 2012, 1 commit