1. 02 Apr 2022, 1 commit
  2. 18 Mar 2022, 1 commit
  3. 12 Mar 2022, 1 commit
    • tracing: Add snapshot at end of kernel boot up · 380af29b
      Committed by Steven Rostedt (Google)
      Add ftrace_boot_snapshot kernel parameter that will take a snapshot at the
      end of boot up just before switching over to user space (it happens during
      the kernel freeing of init memory).
      
      This is useful when there's interesting data that can be collected from
      kernel start up, but gets overridden by user space start up code. With
      this option, the ring buffer content from the boot up traces gets saved in
      the snapshot at the end of boot up. This trace can be read from:
      
       /sys/kernel/tracing/snapshot
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
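      A minimal usage sketch of the parameter described above (option and path as given in the commit message; exact bootloader syntax varies by distro):

```shell
# Kernel command line (appended in the bootloader configuration):
#   ftrace=function ftrace_boot_snapshot
#
# After boot, the preserved boot-time trace can be read from:
#   cat /sys/kernel/tracing/snapshot
```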
  4. 22 Oct 2021, 2 commits
  5. 21 Oct 2021, 1 commit
  6. 20 Oct 2021, 1 commit
  7. 03 Aug 2021, 1 commit
  8. 24 Mar 2021, 1 commit
  9. 10 Feb 2021, 1 commit
  10. 14 Nov 2020, 3 commits
    • livepatch: Use the default ftrace_ops instead of REGS when ARGS is available · 2860cd8a
      Committed by Steven Rostedt (VMware)
      When CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS is available, the ftrace call
      will be able to set the ip of the calling function. This will improve the
      performance of live kernel patching where it does not need all the regs to
      be stored just to change the instruction pointer.
      
      If all archs that support live kernel patching also support
      HAVE_DYNAMIC_FTRACE_WITH_ARGS, then the architecture specific function
      klp_arch_set_pc() could be made generic.
      
      It is possible for an arch to support HAVE_DYNAMIC_FTRACE_WITH_ARGS but
      not HAVE_DYNAMIC_FTRACE_WITH_REGS and still have access to live patching.
      
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Jiri Kosina <jikos@kernel.org>
      Cc: live-patching@vger.kernel.org
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Miroslav Benes <mbenes@suse.cz>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    • ftrace/x86: Allow for arguments to be passed in to ftrace_regs by default · 02a474ca
      Committed by Steven Rostedt (VMware)
      Currently, the only way to get access to the registers of a function via a
      ftrace callback is to set the "FL_SAVE_REGS" bit in the ftrace_ops. But as this
      saves all regs as if a breakpoint were to trigger (for use with kprobes), it
      is expensive.
      
      The regs are already saved on the stack for the default ftrace callbacks, as
      that is required, otherwise a function being traced would get the wrong
      arguments and possibly crash. And on x86, since the arguments are already
      stored where they would be in a pt_regs structure, and that layout is shared
      with the regs version of a callback, it makes sense to always pass that
      information to all callbacks.
      
      An architecture that does this (as x86_64 now does) should set
      HAVE_DYNAMIC_FTRACE_WITH_ARGS, which lets the generic code know that it has
      access to the arguments without the callbacks having to set the flag.
      
      This also includes saving the stack pointer, which can be used for accessing
      arguments on the stack, as well as letting the function graph tracer not
      require its own trampoline!
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    • ftrace: Have the callbacks receive a struct ftrace_regs instead of pt_regs · d19ad077
      Committed by Steven Rostedt (VMware)
      In preparation to have arguments of a function passed to callbacks attached
      to functions as default, change the default callback prototype to receive a
      struct ftrace_regs as the fourth parameter instead of a pt_regs.
      
      Callbacks that set the FL_SAVE_REGS flag in their ftrace_ops flags will now
      need to get the pt_regs via the ftrace_get_regs() helper. If that helper is
      called from a callback whose ftrace_ops did not have the FL_SAVE_REGS flag
      set, it will return NULL.
      
      This allows ftrace_regs to hold just enough to get the parameters and stack
      pointer, without the worry that callbacks may receive a pt_regs that is not
      completely filled in.
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  11. 11 Nov 2020, 1 commit
  12. 06 Nov 2020, 2 commits
  13. 09 Oct 2020, 2 commits
  14. 20 Sep 2020, 1 commit
  15. 19 Sep 2020, 1 commit
  16. 15 Jun 2020, 1 commit
  17. 08 Jun 2020, 1 commit
  18. 13 May 2020, 1 commit
    • x86/ftrace: Have ftrace trampolines turn read-only at the end of system boot up · 59566b0b
      Committed by Steven Rostedt (VMware)
      Booting one of my machines, it triggered the following crash:
      
       Kernel/User page tables isolation: enabled
       ftrace: allocating 36577 entries in 143 pages
       Starting tracer 'function'
       BUG: unable to handle page fault for address: ffffffffa000005c
       #PF: supervisor write access in kernel mode
       #PF: error_code(0x0003) - permissions violation
       PGD 2014067 P4D 2014067 PUD 2015063 PMD 7b253067 PTE 7b252061
       Oops: 0003 [#1] PREEMPT SMP PTI
       CPU: 0 PID: 0 Comm: swapper Not tainted 5.4.0-test+ #24
       Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./To be filled by O.E.M., BIOS SDBLI944.86P 05/08/2007
       RIP: 0010:text_poke_early+0x4a/0x58
       Code: 34 24 48 89 54 24 08 e8 bf 72 0b 00 48 8b 34 24 48 8b 4c 24 08 84 c0 74 0b 48 89 df f3 a4 48 83 c4 10 5b c3 9c 58 fa 48 89 df <f3> a4 50 9d 48 83 c4 10 5b e9 d6 f9 ff ff 0 41 57 49
       RSP: 0000:ffffffff82003d38 EFLAGS: 00010046
       RAX: 0000000000000046 RBX: ffffffffa000005c RCX: 0000000000000005
       RDX: 0000000000000005 RSI: ffffffff825b9a90 RDI: ffffffffa000005c
       RBP: ffffffffa000005c R08: 0000000000000000 R09: ffffffff8206e6e0
       R10: ffff88807b01f4c0 R11: ffffffff8176c106 R12: ffffffff8206e6e0
       R13: ffffffff824f2440 R14: 0000000000000000 R15: ffffffff8206eac0
       FS:  0000000000000000(0000) GS:ffff88807d400000(0000) knlGS:0000000000000000
       CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
       CR2: ffffffffa000005c CR3: 0000000002012000 CR4: 00000000000006b0
       Call Trace:
        text_poke_bp+0x27/0x64
        ? mutex_lock+0x36/0x5d
        arch_ftrace_update_trampoline+0x287/0x2d5
        ? ftrace_replace_code+0x14b/0x160
        ? ftrace_update_ftrace_func+0x65/0x6c
        __register_ftrace_function+0x6d/0x81
        ftrace_startup+0x23/0xc1
        register_ftrace_function+0x20/0x37
        func_set_flag+0x59/0x77
        __set_tracer_option.isra.19+0x20/0x3e
        trace_set_options+0xd6/0x13e
        apply_trace_boot_options+0x44/0x6d
        register_tracer+0x19e/0x1ac
        early_trace_init+0x21b/0x2c9
        start_kernel+0x241/0x518
        ? load_ucode_intel_bsp+0x21/0x52
        secondary_startup_64+0xa4/0xb0
      
      I was able to trigger it on other machines by adding both "ftrace=function"
      and "trace_options=func_stack_trace" to the kernel command line.
      
      The cause is that "ftrace=function" registers the function tracer and
      creates a trampoline, which is set executable and read-only. Then
      "trace_options=func_stack_trace" updates that same trampoline to include
      the stack-trace version of the function tracer. Since the trampoline
      already exists, it is updated with text_poke_bp(). The problem is that when
      text_poke_bp() is called while system_state == SYSTEM_BOOTING, it simply
      does a memcpy() instead of going through the page mapping, as it assumes
      the text is still read-write. But in this case it is not, and we take a
      fault and crash.
      
      Instead, let's keep the ftrace trampolines read-write during boot up, and
      then, when the kernel executable text is set to read-only, set the ftrace
      trampolines read-only as well.
      
      Link: https://lkml.kernel.org/r/20200430202147.4dc6e2de@oasis.local.home
      
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: stable@vger.kernel.org
      Fixes: 768ae440 ("x86/ftrace: Use text_poke()")
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  19. 27 Apr 2020, 1 commit
  20. 11 Dec 2019, 1 commit
  21. 23 Nov 2019, 1 commit
  22. 21 Nov 2019, 1 commit
  23. 19 Nov 2019, 1 commit
  24. 15 Nov 2019, 1 commit
  25. 13 Nov 2019, 4 commits
    • ftrace/x86: Add a counter to test function_graph with direct · a3ad1a7e
      Committed by Steven Rostedt (VMware)
      As testing for direct calls from the function graph tracer adds a little
      overhead (which is a lot when tracing every function), add a counter that
      can be used to check whether the function_graph tracer needs to test for a
      direct caller or not.
      
      It would have been nicer if we could use a static branch, but the static
      branch logic fails when used within the function graph tracer trampoline.
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    • ftrace/x86: Add register_ftrace_direct() for custom trampolines · 562955fe
      Committed by Steven Rostedt (VMware)
      Enable x86 to allow for register_ftrace_direct(), where a custom trampoline
      may be called directly from an ftrace mcount/fentry location.
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    • ftrace: Add ftrace_find_direct_func() · 013bf0da
      Committed by Steven Rostedt (VMware)
      As the function_graph tracer modifies the return address to insert a
      trampoline that traces the return of a function, it must be aware of direct
      callers: when it gets called, the function's return address may not be on
      the stack where it expects. It may have to check whether that return
      address points to a direct caller and adjust for it if so.
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    • ftrace: Add register_ftrace_direct() · 763e34e7
      Committed by Steven Rostedt (VMware)
      Add the start of the functionality to allow other trampolines to use the
      ftrace mcount/fentry/nop location. This adds two new functions:
      
       register_ftrace_direct() and unregister_ftrace_direct()
      
      Both take two parameters: the first is the instruction address of where the
      mcount/fentry/nop exists, and the second is the trampoline to have that
      location called.
      
      This handles cases where ftrace is already used on that same location and
      keeps it working: the registered direct-call trampoline gets called after
      all the registered ftrace callers are handled.
      
      Currently, it will not allow for IP_MODIFY functions to be called at the
      same locations, which include some kprobes and live kernel patching.
      
      At this point, no architecture supports this. This is only the start of
      implementing the framework.
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  26. 06 Nov 2019, 2 commits
    • module/ftrace: handle patchable-function-entry · a1326b17
      Committed by Mark Rutland
      When using patchable-function-entry, the compiler will record the
      callsites into a section named "__patchable_function_entries" rather
      than "__mcount_loc". Let's abstract this difference behind a new
      FTRACE_CALLSITE_SECTION, so that architectures don't have to handle this
      explicitly (e.g. with custom module linker scripts).
      
      As parisc currently handles this explicitly, it is fixed up accordingly,
      with its custom linker script removed. Since FTRACE_CALLSITE_SECTION is
      only defined when DYNAMIC_FTRACE is selected, the parisc module loading
      code is updated to only use the definition in that case. When
      DYNAMIC_FTRACE is not selected, modules shouldn't have this section, so
      this removes some redundant work in that case.
      
      To make sure that this is kept up to date for modules and the main
      kernel, a comment is added to vmlinux.lds.h, with the existing ifdeffery
      simplified for legibility.
      
      I built parisc generic-{32,64}bit_defconfig with DYNAMIC_FTRACE enabled,
      and verified that the section made it into the .ko files for modules.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Helge Deller <deller@gmx.de>
      Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Sven Schnelle <svens@stackframe.org>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: Jessica Yu <jeyu@kernel.org>
      Cc: linux-parisc@vger.kernel.org
    • ftrace: add ftrace_init_nop() · fbf6c73c
      Committed by Mark Rutland
      Architectures may need to perform special initialization of ftrace
      callsites, and today they do so by special-casing ftrace_make_nop() when
      the expected branch address is MCOUNT_ADDR. In some cases (e.g. for
      patchable-function-entry), we don't have an mcount-like symbol and don't
      want a synthetic MCOUNT_ADDR, but we may need to perform some
      initialization of callsites.
      
      To make it possible to separate initialization from runtime
      modification, and to handle cases without an mcount-like symbol, this
      patch adds an optional ftrace_init_nop() function that architectures can
      implement, which does not pass a branch address.
      
      Where an architecture does not provide ftrace_init_nop(), we will fall
      back to the existing behaviour of calling ftrace_make_nop() with
      MCOUNT_ADDR.
      
      At the same time, ftrace_code_disable() is renamed to
      ftrace_nop_initialize() to make it clearer that it is intended to
      initialize a callsite into a disabled state, and is not for disabling a
      callsite that has been enabled at runtime. The kerneldoc description of
      the rec argument is updated to cover non-mcount callsites.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Miroslav Benes <mbenes@suse.cz>
      Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Sven Schnelle <svens@stackframe.org>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: Ingo Molnar <mingo@redhat.com>
  27. 04 Nov 2019, 1 commit
  28. 26 May 2019, 1 commit
  29. 30 Apr 2019, 1 commit
  30. 29 Apr 2019, 1 commit
    • tracing: Cleanup stack trace code · 3d9a8072
      Committed by Thomas Gleixner
      - Remove the extra array member of stack_dump_trace[] along with the
        ARRAY_SIZE - 1 initialization for struct stack_trace :: max_entries.
      
        Both are historical leftovers of no value. The stack tracer never exceeds
        the array and there is no extra storage requirement either.
      
      - Make variables which are only used in trace_stack.c static.
      
      - Simplify the enable/disable logic.
      
      - Rename stack_trace_print() as it's using the stack_trace_ namespace. Free
        the name up for stack trace related functions.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
      Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: linux-mm@kvack.org
      Cc: David Rientjes <rientjes@google.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: kasan-dev@googlegroups.com
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Akinobu Mita <akinobu.mita@gmail.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: iommu@lists.linux-foundation.org
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Johannes Thumshirn <jthumshirn@suse.de>
      Cc: David Sterba <dsterba@suse.com>
      Cc: Chris Mason <clm@fb.com>
      Cc: Josef Bacik <josef@toxicpanda.com>
      Cc: linux-btrfs@vger.kernel.org
      Cc: dm-devel@redhat.com
      Cc: Mike Snitzer <snitzer@redhat.com>
      Cc: Alasdair Kergon <agk@redhat.com>
      Cc: Daniel Vetter <daniel@ffwll.ch>
      Cc: intel-gfx@lists.freedesktop.org
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
      Cc: dri-devel@lists.freedesktop.org
      Cc: David Airlie <airlied@linux.ie>
      Cc: Jani Nikula <jani.nikula@linux.intel.com>
      Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
      Cc: Tom Zanussi <tom.zanussi@linux.intel.com>
      Cc: Miroslav Benes <mbenes@suse.cz>
      Cc: linux-arch@vger.kernel.org
      Link: https://lkml.kernel.org/r/20190425094801.230654524@linutronix.de
  31. 11 Dec 2018, 1 commit