- 28 March 2018, 7 commits
-
-
Submitted by Thomas Richter
Add CPU measurement counter facility event description files (JSON files) for IBM z13. Signed-off-by: Thomas Richter <tmricht@linux.vnet.ibm.com> Reviewed-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Link: http://lkml.kernel.org/r/20180326082538.2258-4-tmricht@linux.vnet.ibm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Thomas Richter
Add CPU measurement counter facility event description files (JSON files) for IBM zEC12 and zBC12. Signed-off-by: Thomas Richter <tmricht@linux.vnet.ibm.com> Reviewed-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Link: http://lkml.kernel.org/r/20180326082538.2258-3-tmricht@linux.vnet.ibm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Thomas Richter
Add CPU measurement counter facility event description files (JSON files) for IBM z196. Signed-off-by: Thomas Richter <tmricht@linux.vnet.ibm.com> Reviewed-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Link: http://lkml.kernel.org/r/20180326082538.2258-2-tmricht@linux.vnet.ibm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Thomas Richter
Add CPU measurement counter facility event description files (JSON files) for IBM z10EC and z10BC. Signed-off-by: Thomas Richter <tmricht@linux.vnet.ibm.com> Reviewed-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Link: http://lkml.kernel.org/r/20180326082538.2258-1-tmricht@linux.vnet.ibm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
The previous patch is insufficient to cure the reported 'perf trace' segfault, as it only cures the perf_mmap__read_done() case, moving the segfault to the perf_mmap__read_init() function; fix it by doing the same refcount check. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Fixes: 8872481b ("perf mmap: Introduce perf_mmap__read_init()") Link: https://lkml.kernel.org/r/20180326144127.GF18897@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
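Both this fix and the one below boil down to the same guard: bail out early when the ring buffer's refcount has already dropped to zero. A minimal sketch of that pattern, with struct perf_mmap trimmed to illustrative fields (the real layout lives in tools/perf/util/mmap.h and uses refcount_t, not a plain int):

    #include <errno.h>

    /* Trimmed, illustrative stand-in for struct perf_mmap. */
    struct perf_mmap {
            void *base;   /* ring-buffer mapping, torn down by perf_mmap__consume() */
            int   refcnt; /* drops to zero once the map is gone */
    };

    static int perf_mmap__read_init_sketch(struct perf_mmap *map)
    {
            /*
             * The buffer may already have been unmapped: bail out instead of
             * dereferencing freed memory. The read_done() fix below applies
             * the same check at the end of the read loop.
             */
            if (!map->refcnt)
                    return -ENOENT;

            /* ... go on to compute head/tail and validate the buffer ... */
            return 0;
    }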
-
Submitted by Kan Liang
There is a segmentation fault when running 'perf trace'. For example: [root@jouet e]# perf trace -e *chdir -o /tmp/bla perf report --ignore-vmlinux -i ../perf.data The perf_mmap__consume() could unmap the mmap. It needs to check the refcnt in perf_mmap__read_done(). Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Fixes: ee023de0 ("perf mmap: Introduce perf_mmap__read_done()") Link: http://lkml.kernel.org/r/1522071729-16776-1-git-send-email-kan.liang@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Jiri Olsa
Currently the "opts" variable is not zeroed and we keep on adding to it, ending up with: $ check-headers.sh 2>&1 + opts=' "-B"' + opts=' "-B" "-B"' + opts=' "-B" "-B" "-B"' + opts=' "-B" "-B" "-B" "-B"' + opts=' "-B" "-B" "-B" "-B" "-B"' + opts=' "-B" "-B" "-B" "-B" "-B" "-B"' Fix this by initializing it in the check() function, right before starting the loop. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/20180321140515.2252-1-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 27 March 2018, 1 commit
-
-
Submitted by Davidlohr Bueso
No changes in refcount semantics -- use DEFINE_STATIC_KEY_FALSE() for initialization and replace: static_key_slow_inc|dec() => static_branch_inc|dec() static_key_false() => static_branch_unlikely() Added a '_key' suffix to rdpmc_always_available, for better self-documentation. Signed-off-by: Davidlohr Bueso <dbueso@suse.de> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: akpm@linux-foundation.org Link: http://lkml.kernel.org/r/20180326210929.5244-5-dave@stgolabs.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
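A minimal sketch of that conversion pattern; the key name follows the '_key' suffix mentioned above, but the surrounding helpers are made up for illustration rather than being the actual arch/x86 code:

    #include <linux/jump_label.h>
    #include <linux/types.h>

    /* New-style key replacing a struct static_key initialized to false. */
    static DEFINE_STATIC_KEY_FALSE(rdpmc_always_available_key);

    static inline bool rdpmc_fast_path_enabled(void)
    {
            /* was: static_key_false(&rdpmc_always_available) */
            return static_branch_unlikely(&rdpmc_always_available_key);
    }

    static void rdpmc_set_always_available(bool on)
    {
            /* was: static_key_slow_inc() / static_key_slow_dec() */
            if (on)
                    static_branch_inc(&rdpmc_always_available_key);
            else
                    static_branch_dec(&rdpmc_always_available_key);
    }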
-
- 25 March 2018, 1 commit
-
-
Submitted by Ingo Molnar
Merge tag 'perf-core-for-mingo-4.17-20180323' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo: - Move non-TUI specific annotation routines out of the TUI browser so that they can be used in other UIs, and to demonstrate that, introduce a 'perf annotate --stdio2' option that applies those formatting routines to provide a non-interactive annotation mode (Arnaldo Carvalho de Melo) - Add a 'P' hotkey to the annotation TUI to dump the current annotated symbol to a file, easing reporting via e-mail, by getting rid of the spaces + right hand side scrollbar chars (Arnaldo Carvalho de Melo) - Support --ignore-vmlinux in 'perf report' and 'perf annotate', as already present in 'perf top', to use /proc/{kcore,kallsyms}, allowing one to see what is in fact running (patched stuff, alternatives, ftrace, etc.), not the initial state of the kernel (vmlinux) (Arnaldo Carvalho de Melo) - Support 'jump' instructions to a different function, treating them as 'call' instructions (Arnaldo Carvalho de Melo) - Fix some jump artifacts when using vmlinux + ASM functions, where the ELF symtab entry for entry_SYSCALL_64, for instance, includes that function and what comes after the 'syscall_return_via_sysret' label, but objdump -dS prints the jump targets + offsets using the syscall_return_via_sysret address, which was confusing 'perf annotate'. See the cset comments for further info (Arnaldo Carvalho de Melo) - Report error from dwfl_attach_state() in the unwind code (Martin Vuille) - Reference Py_None before returning it in the python extension (Petr Machata) Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 24 March 2018, 7 commits
-
-
Submitted by Ingo Molnar
With the cherry-picked perf/urgent commit merged separately, we can now merge all the fixes without conflicts. Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Ingo Molnar
Pick up a cherry-picked commit. Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Arnaldo Carvalho de Melo
These types of jumps were confusing the annotate browser: entry_SYSCALL_64 /lib/modules/4.16.0-rc5-00086-gdf09348f/build/vmlinux entry_SYSCALL_64 /lib/modules/4.16.0-rc5-00086-gdf09348f/build/vmlinux Percent│ffffffff81a00020: swapgs <SNIP> │ffffffff81a00128: ↓ jae ffffffff81a00139 <syscall_return_via_sysret+0x53> <SNIP> │ffffffff81a00155: → jmpq *0x825d2d(%rip) # ffffffff82225e88 <pv_cpu_ops+0xe8> I.e. the syscall_return_via_sysret function is actually "inside" the entry_SYSCALL_64 function, and the offsets in jumps like these (+0x53) are relative to syscall_return_via_sysret, not to entry_SYSCALL_64. Or this may be some artifact in how the assembler marks the start and end of a function and how this ends up in the ELF symtab for vmlinux, i.e. syscall_return_via_sysret() isn't "inside" entry_SYSCALL_64, but just right after it. From readelf -sw vmlinux: 80267: ffffffff81a00020 315 NOTYPE GLOBAL DEFAULT 1 entry_SYSCALL_64 316: ffffffff81a000e6 0 NOTYPE LOCAL DEFAULT 1 syscall_return_via_sysret 0xffffffff81a00020 + 315 > 0xffffffff81a000e6 So instead of looking for offsets after that last '+' sign, calculate offsets for jump target addresses that are inside the function being disassembled from the absolute address, 0xffffffff81a00139 in this case, subtracting from it the objdump address for the start of the function being disassembled, entry_SYSCALL_64() in this case. So, before this patch: entry_SYSCALL_64 /lib/modules/4.16.0-rc5-00086-gdf09348f/build/vmlinux Percent│ pop %r10 │ pop %r9 │ pop %r8 │ pop %rax │ pop %rsi │ pop %rdx │ pop %rsi │ mov %rsp,%rdi │ mov %gs:0x5004,%rsp │ pushq 0x28(%rdi) │ pushq (%rdi) │ push %rax │ ↑ jmp 6c │ mov %cr3,%rdi │ ↑ jmp 62 │ mov %rdi,%rax │ and $0x7ff,%rdi │ bt %rdi,%gs:0x2219a │ ↑ jae 53 │ btr %rdi,%gs:0x2219a │ mov %rax,%rdi │ ↑ jmp 5b After: entry_SYSCALL_64 /lib/modules/4.16.0-rc5-00086-gdf09348f/build/vmlinux 0.65 │ → jne swapgs_restore_regs_and_return_to_usermode │ pop %r10 │ pop %r9 │ pop %r8 │ pop %rax │ pop %rsi │ pop %rdx │ pop %rsi │ mov %rsp,%rdi │ mov %gs:0x5004,%rsp │ pushq 0x28(%rdi) │ pushq (%rdi) │ push %rax │ ↓ jmp 132 │ mov %cr3,%rdi │ ┌──jmp 128 │ │ mov %rdi,%rax │ │ and $0x7ff,%rdi │ │ bt %rdi,%gs:0x2219a │ │↓ jae 119 │ │ btr %rdi,%gs:0x2219a │ │ mov %rax,%rdi │ │↓ jmp 121 │119:│ mov %rax,%rdi │ │ bts $0x3f,%rdi │121:│ or $0x800,%rdi │128:└─→or $0x1000,%rdi │ mov %rdi,%cr3 │132: pop %rax │ pop %rdi │ pop %rsp │ → jmpq *0x825d2d(%rip) # ffffffff82225e88 <pv_cpu_ops+0xe8> With those at least navigating to the right destination, an improvement for these cases seems to be to somehow mark those inner functions, which in this case could be: entry_SYSCALL_64 /lib/modules/4.16.0-rc5-00086-gdf09348f/build/vmlinux │syscall_return_via_sysret: │ pop %r15 │ pop %r14 │ pop %r13 │ pop %r12 │ pop %rbp │ pop %rbx │ pop %rsi │ pop %r10 │ pop %r9 │ pop %r8 │ pop %rax │ pop %rsi │ pop %rdx │ pop %rsi │ mov %rsp,%rdi │ mov %gs:0x5004,%rsp │ pushq 0x28(%rdi) │ pushq (%rdi) │ push %rax │ ↓ jmp 132 │ mov %cr3,%rdi │ ┌──jmp 128 │ │ mov %rdi,%rax │ │ and $0x7ff,%rdi │ │ bt %rdi,%gs:0x2219a │ │↓ jae 119 │ │ btr %rdi,%gs:0x2219a │ │ mov %rax,%rdi │ │↓ jmp 121 │119:│ mov %rax,%rdi │ │ bts $0x3f,%rdi │121:│ or $0x800,%rdi │128:└─→or $0x1000,%rdi │ mov %rdi,%cr3 │132: pop %rax │ pop %rdi │ pop %rsp │ → jmpq *0x825d2d(%rip) # ffffffff82225e88 <pv_cpu_ops+0xe8> This all gets much better viewed if one uses 'perf report --ignore-vmlinux', forcing the usage of /proc/kcore + /proc/kallsyms, when the above actually gets down to: # perf report --ignore-vmlinux ## do '/64', will show the function names containing '64', ## navigate to /entry_SYSCALL_64_after_hwframe.annotation, ## press 'A' to annotate, then 'P' to print that annotation ## to a file ## From another xterm (or see on screen, this 'P' thing is for ## getting rid of those right side scroll bars/spaces): # cat /entry_SYSCALL_64_after_hwframe.annotation entry_SYSCALL_64_after_hwframe() /proc/kcore Event: cycles:ppp Percent Disassembly of section load0: ffffffff9aa00044 <load0>: 11.97 push %rax 4.85 push %rdi push %rsi 2.59 push %rdx 2.27 push %rcx 0.32 pushq $0xffffffffffffffda 1.29 push %r8 xor %r8d,%r8d 1.62 push %r9 0.65 xor %r9d,%r9d 1.62 push %r10 xor %r10d,%r10d 5.50 push %r11 xor %r11d,%r11d 3.56 push %rbx xor %ebx,%ebx 4.21 push %rbp xor %ebp,%ebp 2.59 push %r12 0.97 xor %r12d,%r12d 3.24 push %r13 xor %r13d,%r13d 2.27 push %r14 xor %r14d,%r14d 4.21 push %r15 xor %r15d,%r15d 0.97 mov %rsp,%rdi 5.50 → callq do_syscall_64 14.56 mov 0x58(%rsp),%rcx 7.44 mov 0x80(%rsp),%r11 0.32 cmp %rcx,%r11 → jne swapgs_restore_regs_and_return_to_usermode 0.32 shl $0x10,%rcx 0.32 sar $0x10,%rcx 3.24 cmp %rcx,%r11 → jne swapgs_restore_regs_and_return_to_usermode 2.27 cmpq $0x33,0x88(%rsp) 1.29 → jne swapgs_restore_regs_and_return_to_usermode mov 0x30(%rsp),%r11 8.74 cmp %r11,0x90(%rsp) → jne swapgs_restore_regs_and_return_to_usermode 0.32 test $0x10100,%r11 → jne swapgs_restore_regs_and_return_to_usermode 0.32 cmpq $0x2b,0xa0(%rsp) 0.65 → jne swapgs_restore_regs_and_return_to_usermode I.e. using kallsyms makes the function start/end be done differently than using what is in the vmlinux ELF symtab, and actually the hits go to entry_SYSCALL_64_after_hwframe, which is a GLOBAL() after the start of entry_SYSCALL_64: ENTRY(entry_SYSCALL_64) UNWIND_HINT_EMPTY <SNIP> pushq $__USER_CS /* pt_regs->cs */ pushq %rcx /* pt_regs->ip */ GLOBAL(entry_SYSCALL_64_after_hwframe) pushq %rax /* pt_regs->orig_ax */ PUSH_AND_CLEAR_REGS rax=$-ENOSYS And it goes and ends at: cmpq $__USER_DS, SS(%rsp) /* SS must match SYSRET */ jne swapgs_restore_regs_and_return_to_usermode /* * We win! This label is here just for ease of understanding * perf profiles. Nothing jumps here. */ syscall_return_via_sysret: /* rcx and r11 are already restored (see code above) */ UNWIND_HINT_EMPTY POP_REGS pop_rdi=0 skip_r11rcx=1 So perhaps some people should really just play with '--ignore-vmlinux' to force /proc/kcore + kallsyms. One idea is to do both, i.e. have a vmlinux annotation and a kcore+kallsyms one, when possible, and even show the patched location, etc. Reported-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-r11knxv8voesav31xokjiuo6@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
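The arithmetic the fix relies on boils down to one subtraction; a hedged sketch with illustrative names, not the actual tools/perf/util/annotate.c code:

    #include <inttypes.h>
    #include <stdio.h>

    /*
     * Offset shown for a jump whose target lies inside the function being
     * disassembled: absolute target address minus the objdump start address
     * of that function, instead of whatever follows the last '+' sign.
     */
    static uint64_t jump_target_offset(uint64_t target_addr, uint64_t func_start)
    {
            return target_addr - func_start;
    }

    int main(void)
    {
            /* 0xffffffff81a00139 - 0xffffffff81a00020 = 0x119, matching the
             * "119:" label in the corrected listing above. */
            printf("%#" PRIx64 "\n",
                   jump_target_offset(0xffffffff81a00139ULL, 0xffffffff81a00020ULL));
            return 0;
    }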
-
Submitted by Arnaldo Carvalho de Melo
That strchr() in jump__scnprintf() needs to be nuked somehow, as it, IIRC, is already done in jump__parse() and, if needed at scnprintf() time, should be stashed in the struct filled in at parse() time. For now just defer it to just before where it is used. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-j0t5hagnphoz9xw07bh3ha3g@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
For instance: entry_SYSCALL_64 /lib/modules/4.16.0-rc5-00086-gdf09348f/build/vmlinux 5.50 │ → callq do_syscall_64 14.56 │ mov 0x58(%rsp),%rcx 7.44 │ mov 0x80(%rsp),%r11 0.32 │ cmp %rcx,%r11 │ → jne swapgs_restore_regs_and_return_to_usermode 0.32 │ shl $0x10,%rcx 0.32 │ sar $0x10,%rcx 3.24 │ cmp %rcx,%r11 │ → jne swapgs_restore_regs_and_return_to_usermode 2.27 │ cmpq $0x33,0x88(%rsp) 1.29 │ → jne swapgs_restore_regs_and_return_to_usermode │ mov 0x30(%rsp),%r11 8.74 │ cmp %r11,0x90(%rsp) │ → jne swapgs_restore_regs_and_return_to_usermode 0.32 │ test $0x10100,%r11 │ → jne swapgs_restore_regs_and_return_to_usermode 0.32 │ cmpq $0x2b,0xa0(%rsp) 0.65 │ → jne swapgs_restore_regs_and_return_to_usermode It'll behave just like a "call" instruction, i.e. press Enter or the right arrow over one such line and the browser will navigate to the annotated disassembly of that function, which, when exited via the left arrow or Esc, will come back to the calling function. Now to support jumping to an offset in a different function... Reported-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-78o508mqvr8inhj63ddtw7mo@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
Because they all really check if we can access data structures/visual constructs where a "jump" instruction targets code in the same function, i.e. things like: __pthread_mutex_lock /usr/lib64/libpthread-2.26.so 1.95 │ mov __pthread_force_elision,%ecx │ ┌──test %ecx,%ecx 0.07 │ ├──je 60 │ │ test $0x300,%esi │ │↓ jne 60 │ │ or $0x100,%esi │ │ mov %esi,0x10(%rdi) │ 42:│ mov %esi,%edx │ │ lea 0x16(%r8),%rsi │ │ mov %r8,%rdi │ │ and $0x80,%edx │ │ add $0x8,%rsp │ │→ jmpq __lll_lock_elision │ │ nop 0.29 │ 60:└─→and $0x80,%esi 0.07 │ mov $0x1,%edi 0.29 │ xor %eax,%eax 2.53 │ lock cmpxchg %edi,(%r8) And not things like that "jmpq __lll_lock_elision", which instead should behave like a "call" instruction and "jump" to the disassembly of "__lll_lock_elision". Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-3cwx39u3h66dfw9xjrlt7ca2@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
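The distinction being drawn here can be summed up in one predicate; a sketch under the assumption that the current symbol's boundaries are known (illustrative names, not the real perf annotation code):

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * A jump whose target stays within [sym_start, sym_end) is an ordinary
     * intra-function jump; anything else gets the "call" treatment, i.e.
     * navigating to it opens the disassembly of the other function.
     */
    static bool jump_is_call_like(uint64_t target, uint64_t sym_start, uint64_t sym_end)
    {
            return target < sym_start || target >= sym_end;
    }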
-
Submitted by Petr Machata
Python None objects are handled just like all the other objects with respect to their reference counting. Before returning Py_None, its reference count thus needs to be bumped. Signed-off-by: Petr Machata <petrm@mellanox.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Petr Machata <petrm@mellanox.com> Link: http://lkml.kernel.org/r/b1e565ecccf68064d8d54f37db5d028dda8fa522.1521675563.git.petrm@mellanox.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
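A minimal sketch of the rule the fix applies, shaped as a hypothetical extension callback (the function name is made up for illustration):

    #include <Python.h>

    static PyObject *example_handler(PyObject *self, PyObject *args)
    {
            /*
             * Py_None is refcounted like any other object: bump its count
             * before handing it back, otherwise its refcount drifts down.
             * (The Py_RETURN_NONE convenience macro does exactly this.)
             */
            Py_INCREF(Py_None);
            return Py_None;
    }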
-
- 22 March 2018, 2 commits
-
-
Submitted by Arnaldo Carvalho de Melo
Things like this in _cpp_lex_token (gcc's cc1 program): cpp_named_operator2name@@base+0xa72 point to a place that is after the cpp_named_operator2name boundaries, i.e. in the ELF symbol table for cc1, cpp_named_operator2name is marked as being 32 bytes long, but it is in fact much larger than that, so we seem to need a symbols__find() routine that looks for >= current->start and < next_symbol->start, possibly just for C++ objects? For now let's just make some progress by marking jumps to outside the current function as call-like. Actual navigation will come next, with further understanding of how the symbol searching and disassembly should be done. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-aiys0a0bsgm3e00hbi6fg7yy@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
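A sketch of the lookup rule hinted at above, assuming a table of symbols sorted by start address (illustrative types, not the real symbols__find()):

    #include <stddef.h>
    #include <stdint.h>

    struct sym {
            const char *name;
            uint64_t    start;   /* table sorted by ascending start address */
    };

    /*
     * Treat an address as belonging to a symbol when it falls at or after
     * that symbol's start and before the next symbol's start, instead of
     * trusting the (sometimes too small) size recorded in the symtab.
     */
    static const struct sym *sym_containing(const struct sym *syms, size_t n,
                                            uint64_t addr)
    {
            for (size_t i = 0; i < n; i++) {
                    uint64_t next_start = (i + 1 < n) ? syms[i + 1].start : UINT64_MAX;
                    if (addr >= syms[i].start && addr < next_start)
                            return &syms[i];
            }
            return NULL;
    }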
-
Submitted by Arnaldo Carvalho de Melo
We need that to figure out if jumps have targets in a different function. E.g. _cpp_lex_token(), in /usr/libexec/gcc/x86_64-redhat-linux/5.3.1/cc1, has a line like this: jne c469be <cpp_named_operator2name@@base+0xa72> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-ris0ioziyp469pofpzix2atb@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 21 March 2018, 22 commits
-
-
Submitted by Arnaldo Carvalho de Melo
Since we already set notes->start to map__rip_2objdump(map, sym->start) in symbol__annotate2(), there is no need to calculate that address again in symbol__calc_lines(); just use notes->start. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-ycxlg8mm5ueuj21w6gi62l7g@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
Just like we have in the histograms browser used as the main screen for 'perf top --tui' and 'perf report --tui', to print the current annotation to a file with a name composed of the symbol name and the ".annotation" suffix. Here is one example of pressing 'A' on 'perf top' to live annotate a kernel function and then pressing 'P' to dump that annotation, the resulting file: # cat _raw_spin_lock_irqsave.annotation _raw_spin_lock_irqsave() /proc/kcore Event: cycles:ppp 7.14 nop 21.43 push %rbx 7.14 pushfq pop %rax nop mov %rax,%rbx cli nop xor %eax,%eax mov $0x1,%edx 64.29 lock cmpxchg %edx,(%rdi) test %eax,%eax ↓ jne 2b mov %rbx,%rax pop %rbx ← retq 2b: mov %eax,%esi → callq queued_spin_lock_slowpath mov %rbx,%rax pop %rbx ← retq # Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-zzmnrwugb5vtk7bvg0rbx150@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
We've had this in 'perf top' for quite a while, useful if one wishes to force using /proc/kcore to do annotation using the patched kernel instead of the ELF image it started from, aka vmlinux. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-ircpvox4wzsv7gasrpb28fw9@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
This is already present in 'perf top', albeit undocumented (will fix), and is useful to use /proc/kcore instead of vmlinux and then get what is really in place, not what the kernel starts with, before alternatives, ftrace .text patching, etc.; see the differences: # perf annotate --stdio2 _raw_spin_lock_irqsave _raw_spin_lock_irqsave() /lib/modules/4.16.0-rc4/build/vmlinux Event: anon group { cycles, instructions } 0.00 3.17 → callq __fentry__ 0.00 7.94 push %rbx 7.69 36.51 → callq __page_file_index mov %rax,%rbx 7.69 3.17 → callq *ffffffff82225cd0 xor %eax,%eax mov $0x1,%edx 80.77 49.21 lock cmpxchg %edx,(%rdi) test %eax,%eax ↓ jne 2b 3.85 0.00 mov %rbx,%rax pop %rbx ← retq 2b: mov %eax,%esi → callq queued_spin_lock_slowpath mov %rbx,%rax pop %rbx ← retq [root@jouet ~]# perf annotate --ignore-vmlinux --stdio2 _raw_spin_lock_irqsave _raw_spin_lock_irqsave() /proc/kcore Event: anon group { cycles, instructions } 0.00 3.17 nop 0.00 7.94 push %rbx 0.00 23.81 pushfq 7.69 12.70 pop %rax nop mov %rax,%rbx 7.69 3.17 cli nop xor %eax,%eax mov $0x1,%edx 80.77 49.21 lock cmpxchg %edx,(%rdi) test %eax,%eax ↓ jne 2b 3.85 0.00 mov %rbx,%rax pop %rbx ← retq 2b: mov %eax,%esi → callq *ffffffff820e96b0 mov %rbx,%rax pop %rbx ← retq # Diff of the output of those commands: # perf annotate --stdio2 _raw_spin_lock_irqsave > /tmp/vmlinux # perf annotate --ignore-vmlinux --stdio2 _raw_spin_lock_irqsave > /tmp/kcore # diff -y /tmp/vmlinux /tmp/kcore _raw_spin_lock_irqsave() vmlinux | _raw_spin_lock_irqsave() /proc/kcore Event: anon group { cycles, instructions } Event: anon group { cycles, instructions } 0.00 3.17 → callq __fentry__ | 0.00 3.17 nop 0.00 7.94 push %rbx 0.00 7.94 push %rbx 7.69 36.51 → callq __page_file_index | 0.00 23.81 pushfq > 7.69 12.70 pop %rax > nop mov %rax,%rbx mov %rax,%rbx 7.69 3.17 → callq *ffffffff82225cd0 | 7.69 3.17 cli > nop xor %eax,%eax xor %eax,%eax mov $0x1,%edx mov $0x1,%edx 80.77 49.21 lock cmpxchg %edx,(%rdi) 80.77 49.21 lock cmpxchg %edx,(%rdi) test %eax,%eax test %eax,%eax ↓ jne 2b ↓ jne 2b 3.85 0.00 mov %rbx,%rax 3.85 0.00 mov %rbx,%rax pop %rbx pop %rbx ← retq ← retq 2b: mov %eax,%esi 2b: mov %eax,%esi → callq queued_spin_lock_slowpath| → callq *ffffffff820e96b0 mov %rbx,%rax mov %rbx,%rax pop %rbx pop %rbx ← retq ← retq # This should be further streamlined by doing both annotations and allowing the TUI to toggle initial/current, and show the patched instructions in a slightly different color. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-wz8d269hxkcwaczr0r4rhyjg@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
# perf annotate --stdio2 _raw_spin_lock_irqsave _raw_spin_lock_irqsave() /lib/modules/4.16.0-rc4/build/vmlinux Event: anon group { cycles, instructions } 0.00 3.17 → callq __fentry__ 0.00 7.94 push %rbx 7.69 36.51 → callq __page_file_index mov %rax,%rbx 7.69 3.17 → callq *ffffffff82225cd0 xor %eax,%eax mov $0x1,%edx 80.77 49.21 lock cmpxchg %edx,(%rdi) test %eax,%eax ↓ jne 2b 3.85 0.00 mov %rbx,%rax pop %rbx ← retq 2b: mov %eax,%esi → callq queued_spin_lock_slowpath mov %rbx,%rax pop %rbx ← retq # Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-i86yfyzl8m194ioxgj1jo32f@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
With an empty '[annotate]' section in ~/.perfconfig: # perf record -a --all-kernel -e '{cycles,instructions}:P' sleep 5 [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 2.243 MB perf.data (5513 samples) ] # perf annotate --stdio2 _raw_spin_lock | head -20 Disassembly of section .text: ffffffff81868790 <_raw_spin_lock>: _raw_spin_lock(): EXPORT_SYMBOL(_raw_spin_trylock_bh); #endif #ifndef CONFIG_INLINE_SPIN_LOCK void __lockfunc _raw_spin_lock(raw_spinlock_t *lock) { → callq __fentry__ atomic_cmpxchg(): return xadd(&v->counter, -i); } static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new) { # perf annotate --stdio2 _raw_spin_lock | head -20 → callq __fentry__ xor %eax,%eax mov $0x1,%edx 87.50 100.00 lock cmpxchg %edx,(%rdi) 6.25 0.00 test %eax,%eax ↓ jne 16 6.25 0.00 repz retq 16: mov %eax,%esi ↑ jmpq ffffffff810e96b0 <queued_spin_lock_slowpath> # # cat ~/.perfconfig [annotate] hide_src_code = false show_linenr = true # perf annotate --stdio2 _raw_spin_lock | head -20 3 Disassembly of section .text: 5 ffffffff81868790 <_raw_spin_lock>: 6 _raw_spin_lock(): 143 EXPORT_SYMBOL(_raw_spin_trylock_bh); 144 #endif 146 #ifndef CONFIG_INLINE_SPIN_LOCK 147 void __lockfunc _raw_spin_lock(raw_spinlock_t *lock) 148 { → callq __fentry__ 150 atomic_cmpxchg(): 187 return xadd(&v->counter, -i); 188 } 190 static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new) 191 { # # cat ~/.perfconfig [annotate] hide_src_code = true show_total_period = true # perf annotate --stdio2 _raw_spin_lock | head -20 → callq __fentry__ xor %eax,%eax mov $0x1,%edx 1411316 152339 lock cmpxchg %edx,(%rdi) 344694 0 test %eax,%eax ↓ jne 16 80806 0 repz retq 16: mov %eax,%esi ↑ jmpq ffffffff810e96b0 <queued_spin_lock_slowpath> # Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-nu4rxg5zkdtgs1b2gc40p7v7@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
One more thing that goes from the TUI code to be used more widely; for instance, it'll affect the default options used by: perf annotate --stdio2 Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-0nsz0dm0akdbo30vgja2a10e@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
This uses the TUI augmented formatting routines, modulo interactivity. # perf annotate --ignore-vmlinux --stdio2 _raw_spin_lock_irqsave _raw_spin_lock_irqsave() /proc/kcore Event: cycles:ppp Percent Disassembly of section load0: ffffffff9a8734b0 <load0>: nop push %rbx 50.00 pushfq pop %rax nop mov %rax,%rbx cli nop xor %eax,%eax mov $0x1,%edx 50.00 lock cmpxchg %edx,(%rdi) test %eax,%eax ↓ jne 2b mov %rbx,%rax pop %rbx ← retq 2b: mov %eax,%esi → callq queued_spin_lock_slowpath mov %rbx,%rax pop %rbx ← retq Tested-by: Jin Yao <yao.jin@linux.intel.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-6cte5o8z84mbivbvqlg14uh1@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
Out of the TUI logic that allows toggling the presentation of source code lines. Will be used in the upcoming --stdio2 mode. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-g0ckz9ajy6unswrv2iy39mxk@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
To simplify the passing of arguments, the --stdio2 code will have to set all the fields with operations printing to stdout. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-pcs3c7vdy9ucygxflo4nl1o7@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
We pass some more callbacks, and all of annotate_browser__write() seems to be free of TUI code (except for some arrow constants, will fix). Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-5uo6yvwnxtsbe8y6v0ysaakf@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
For the --tui and --stdio2 cases, using callbacks for print() and set_percent_color() ends up being the easiest path; a real GUI remains as an exercise. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-1o7az1ng55g2g6ppr2jpeuct@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
We'll need it for some callbacks for the upcoming annotation__line_print() routines. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-t3qiobj4ua38xzsq8cyw9ky5@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
Out of the annotate_browser__write() routine, to be used in the --stdio2 mode. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-0he0wyy4haswqi1qb35x37do@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
That does all the extended boilerplate the TUI browser did, leaving the symbol__annotate() function to be used by the old --stdio output mode. Now the upcoming --stdio2 output mode should just use this one to set things up. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-e2x8wuf6gvdhzdryo229vj4i@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
More non-TUI stuff goes to the UI-agnostic library. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-hngv7rpqvtta69ouj7ne770q@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
The previous patch left it where it was to ease review; move it to its right place. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-ikdjr014p7k5kachgyjrgiey@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
This also will be used in other output formats, such as --stdio2. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-86h6ftebc62ij1rx8q9zkpwk@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
More non-strictly-TUI code being moved to the UI-neutral annotation library, to be used in the upcoming --stdio2 output mode. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-ek20dnd8z2y5v54pcepihybz@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
More non-TUI stuff. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-yd4g6q0rngq4i49hz6iymtta@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
Another field that is not TUI-specific. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-jj3dwswndft5mln8hu9k0idv@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
The information in there is all related to things already moved to struct annotation, so move those members to struct annotation_line. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-uc2b9c8iocvuuvbl7hyind84@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-