- 03 June 2015 (2 commits)
-
-
Submitted by Wang Nan

Before patch ba92732e ('perf kmaps: Check kmaps to make code more robust'), 'perf report' and 'perf annotate' would segfault if the trace data contained kernel module information like this:

    # perf report -D -i ./perf.data
    ...
    0 0 0x188 [0x50]: PERF_RECORD_MMAP -1/0: [0xffffffbff1018000(0xf068000) @ 0]: x [test_module]
    ...

    # perf report -i ./perf.data --objdump=/path/to/objdump --kallsyms=/path/to/kallsyms
    perf: Segmentation fault
    -------- backtrace --------
    /path/to/perf[0x503478]
    /lib64/libc.so.6(+0x3545f)[0x7fb201f3745f]
    /path/to/perf[0x499b56]
    /path/to/perf(dso__load_kallsyms+0x13c)[0x49b56c]
    /path/to/perf(dso__load+0x72e)[0x49c21e]
    /path/to/perf(map__load+0x6e)[0x4ae9ee]
    /path/to/perf(thread__find_addr_map+0x24c)[0x47deec]
    /path/to/perf(perf_event__preprocess_sample+0x88)[0x47e238]
    /path/to/perf[0x43ad02]
    /path/to/perf[0x4b55bc]
    /path/to/perf(ordered_events__flush+0xca)[0x4b57ea]
    /path/to/perf[0x4b1a01]
    /path/to/perf(perf_session__process_events+0x3be)[0x4b428e]
    /path/to/perf(cmd_report+0xf11)[0x43bfc1]
    /path/to/perf[0x474702]
    /path/to/perf(main+0x5f5)[0x42de95]
    /lib64/libc.so.6(__libc_start_main+0xf4)[0x7fb201f23bd4]
    /path/to/perf[0x42dfc4]

This happens because __kmod_path__parse treats names with a leading '[' as kernel names instead of kernel module names. If perf.data contains build-id information and the build-id of such a module can be found, its dso->kernel is set to DSO_TYPE_KERNEL by __event_process_build_id() instead of being marked as a kernel module. It is then passed to dso__load() -> dso__load_kernel_sym() -> dso__load_kcore() if --kallsyms is provided.

The referred patch added a NULL pointer check to avoid the segfault, but such kernel modules are still processed incorrectly. This patch fixes __kmod_path__parse so that it treats names like '[test_module]' as kernel modules. kmod-path.c is also updated to reflect the above changes.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Zefan Li <lizefan@huawei.com>
Link: http://lkml.kernel.org/r/1433321541-170245-1-git-send-email-wangnan0@huawei.com
[ Fixed the merge with 0443f36b ("perf machine: Fix the search for the kernel DSO on the unified list") ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
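A minimal sketch of the distinction the fixed parser draws (hypothetical helper name and simplified logic; the real work is done by __kmod_path__parse(), which also extracts the module name and any compression suffix):

    #include <stdbool.h>
    #include <string.h>

    /* Simplified sketch: a '[name]'-style map name is a kernel module
     * unless it is one of the well-known kernel pseudo-DSO names. */
    static bool name_is_kernel_module(const char *name)
    {
            if (name[0] != '[')
                    return false;   /* a regular path, e.g. /lib/modules/.../foo.ko */

            return strcmp(name, "[kernel.kallsyms]") &&
                   strcmp(name, "[guest.kernel.kallsyms]") &&
                   strcmp(name, "[vdso]") &&
                   strcmp(name, "[vsyscall]");
    }

With a check of this shape, the '[test_module]' name from the PERF_RECORD_MMAP above classifies as a module rather than as the kernel image.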
-
Submitted by Arnaldo Carvalho de Melo

When unifying the user_dsos and kernel_dsos lists, a bug was introduced by inverting the check for dso->kernel; fix it.

Fixes: 3d39ac53 ("perf machine: No need to have two DSOs lists")
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/n/tip-xnrnq0kams3s2z9ek1wjb506@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 29 May 2015 (4 commits)
-
-
Submitted by Arnaldo Carvalho de Melo

To make it consistent with the other dso lifetime routines. For instance:

    struct dso *vdso__new(struct machine *machine, const char *short_name, const char *long_name)

becomes:

    struct dso *machine__addnew_vdso(struct machine *machine, const char *short_name, const char *long_name)

Because:

1) There is no 'struct vdso' for us to have vdso__ prefixed routines.

2) It will not really just create a new instance of 'struct dso': it calls dso__new() but also inserts it into the DSOs list/rbtree, and we have a method name for that: 'addnew', just like we have dsos__addnew().

3) So it is really a 'struct machine' operation; machine is the first argument, etc.

This way the place where this is used gets consistent:

            if (vdso) {
                    pgoff = 0;
    -               dso = vdso__dso_findnew(machine, thread);
    +               dso = machine__findnew_vdso(machine, thread);
            } else
                    dso = machine__findnew_dso(machine, filename);

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/n/tip-r3w3tvh8exm9xfz3p4tz9qbz@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
Similar to machine__findnew_thread(), also prepping for refcounting and locking, this time for struct dso instances.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/n/tip-fv3tshv5o1413coh147lszjc@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo

We can, given a DSO, figure out if it is a kernel, a kernel module or a userlevel DSO, so stop having to process two lists in several functions. If searching becomes an issue at some point, we can keep them in an rbtree, etc.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-s4yb0onpdywu6dj2xl9lxi4t@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
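A sketch of why one list suffices once each DSO records what kind of object it is (hypothetical, stripped-down types; perf keeps this information in dso->kernel):

    #include <stddef.h>
    #include <string.h>

    enum dso_kind { DSO_USER, DSO_KERNEL, DSO_KERNEL_MODULE };

    struct dso {
            struct dso    *next;
            const char    *long_name;
            enum dso_kind  kind;
    };

    /* One list serves all lookups; callers that care can filter on kind. */
    static struct dso *dsos__find_by_name(struct dso *head, const char *name)
    {
            for (struct dso *d = head; d; d = d->next)
                    if (!strcmp(d->long_name, name))
                            return d;
            return NULL;
    }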
-
Submitted by Arnaldo Carvalho de Melo

It never was a 'struct dso' method, so fix that by renaming dso__kernel_findnew() to machine__findnew_kernel(). At some point I'll move it all to the machine.[ch] files; for now let's ease patch review by not moving too much stuff.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-zrxmblgsg5vx0iv4rhvq2f6l@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 28 May 2015 (1 commit)
-
-
Submitted by Arnaldo Carvalho de Melo

We have pointers to struct map instances in several places, like in the hist_entry instances, so we need a way to know when we can destroy them, otherwise we may either keep leaking them or end up referencing deleted instances. Start fixing it by reference counting them.

This patch puts the reference count for struct map in place, replacing direct map__delete() calls with map__put() ones, and then grabs a reference count when adding a map to the maps struct where maps for a struct thread are kept.

Next we'll grab reference counts when setting pointers to struct map instances, in places like the hist_entry code.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-wi19xczk0t2a41r1i2chuio5@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
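The pattern this commit puts in place can be sketched like this (hypothetical, stripped-down version; the real struct map keeps its refcount alongside the address range, dso pointer, and so on):

    #include <stdlib.h>

    struct map {
            int refcnt;             /* later made an atomic counter */
            /* ... address range, dso pointer, ... */
    };

    static struct map *map__get(struct map *map)
    {
            if (map)
                    map->refcnt++;  /* e.g. when storing the map in a thread's maps */
            return map;
    }

    static void map__put(struct map *map)
    {
            /* replaces direct map__delete() calls: only the last user frees */
            if (map && --map->refcnt == 0)
                    free(map);
    }

Grabbing a reference when the map is inserted into a container and dropping it with map__put() when that container is torn down is exactly the pairing the commit describes.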
-
- 27 May 2015 (2 commits)
-
-
Submitted by Arnaldo Carvalho de Melo

We use:

    BUG_ON(!RB_EMPTY_NODE(&thread->rb_node));

in the thread destructor as a debugging check to find out about possibly still referenced thread instances being deleted. To do that we need to make sure we use RB_CLEAR_NODE() right after rb_erase(), i.e. that we use the newly introduced rb_erase_init(), which works just like list_del_init().

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-4fcqo5ypy1cjjf15ilb0hn78@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
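rb_erase_init() is the rbtree analogue of list_del_init(); in simplified form it boils down to the following (sketch against the kernel's rbtree API):

    #include <linux/rbtree.h>

    static inline void rb_erase_init(struct rb_node *n, struct rb_root *root)
    {
            rb_erase(n, root);
            RB_CLEAR_NODE(n);       /* so RB_EMPTY_NODE(n) holds afterwards,
                                       letting the destructor's BUG_ON() work */
    }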
-
Submitted by Arnaldo Carvalho de Melo

It could still be in use somewhere, so just call map_groups__put() to make sure we don't delete it prematurely.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-dxmh8mr12i65p8h909vi88cp@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 16 May 2015 (1 commit)
-
-
Submitted by Arnaldo Carvalho de Melo
Use atomic_read(&counter) instead.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-k3hvfvpaut8wp02lzq27muhb@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 12 May 2015 (1 commit)
-
-
Submitted by Arnaldo Carvalho de Melo

Since it is all associated with the refcount for keeping the thread in the rbtree, it is excessive and unnecessarily complex to hold a refcount when changing machine->last_match.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-98kuesmfwtvhsrzx7ttyb0kt@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 09 May 2015 (1 commit)
-
-
Submitted by Arnaldo Carvalho de Melo

In addition to using refcounts for the struct thread lifetime management, we need to protect access to machine->threads from concurrent access. That happens in 'perf top', where one thread processes events, inserting and deleting entries from that rb_tree, while another thread decays hist_entries, which ends up dropping references and ultimately deleting threads from the rb_tree and releasing their resources when no further hist_entry (or other data structure, like in 'perf sched') references them.

So the rule is the same as for refcounts + protected trees in the kernel: get the tree lock, find the object, bump its refcount, drop the tree lock, return, use the object, drop the refcount if no further use of it is needed, keep it if storing it in some other data structure, and drop it when releasing that data structure.

I.e. pair "t = machine__find(new)_thread()" with a "thread__put(t)", and "perf_event__preprocess_sample(&al)" with "addr_location__put(&al)". The addr_location__put() one exists because, as we return references to several data structures, we may end up adding more reference counting for the other data structures and then we'll drop it all at addr_location__put() time.

Acked-by: David Ahern <dsahern@gmail.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-bs9rt4n0jw3hi9f3zxyy3xln@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
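A sketch of the "lock, find, get, unlock, use, put" rule spelled out above (hypothetical, simplified locking and helper names; in perf the lock and the get/put helpers live with struct machine and struct thread):

    #include <pthread.h>

    struct machine;
    struct thread;

    extern pthread_rwlock_t threads_lock;          /* protects machine->threads */
    struct thread *__machine__find_thread(struct machine *m, int pid, int tid);
    struct thread *thread__get(struct thread *t);  /* bumps the refcount */
    void thread__put(struct thread *t);            /* drops it, maybe frees */

    static struct thread *machine__find_thread(struct machine *m, int pid, int tid)
    {
            struct thread *t;

            pthread_rwlock_rdlock(&threads_lock);
            t = thread__get(__machine__find_thread(m, pid, tid));
            pthread_rwlock_unlock(&threads_lock);
            return t;       /* the caller pairs this with thread__put(t) */
    }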
-
- 06 May 2015 (2 commits)
-
-
Submitted by Adrian Hunter
Add support for the PERF_RECORD_ITRACE_START event type. This event can be used to determine the pid and tid that are running when Instruction Tracing starts. Generally that information would come from a sched_switch event but, at the start, no sched_switch events may yet have been recorded.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1430404667-10593-8-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Adrian Hunter

Add support for the PERF_RECORD_AUX event type. PERF_RECORD_AUX is a new kernel event that records when new data lands in the AUX buffer.

Currently it is assumed that AUX data follows the same ring buffer conventions used by the perf events buffer, and consequently the AUX event is not processed during recording. It is processed during session processing so that the information in the 'flags' member is made available.

The format of PERF_RECORD_AUX is outlined in the linux/perf_event.h header file, where the 'flags' are also enumerated. Intel PT and Intel BTS use the flag named PERF_AUX_FLAG_TRUNCATED to determine if data has been lost because the buffer became full while perf was not able to empty it fast enough.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1430404667-10593-7-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
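A sketch of how a consumer reacts to the 'flags' member mentioned above (the payload layout mirrors the PERF_RECORD_AUX description in linux/perf_event.h; treat the struct here as illustrative):

    #include <stdint.h>
    #include <stdio.h>

    #define PERF_AUX_FLAG_TRUNCATED 0x01    /* record was truncated to fit */

    struct aux_event {                      /* fields following the perf_event_header */
            uint64_t aux_offset;
            uint64_t aux_size;
            uint64_t flags;
    };

    static void handle_aux(const struct aux_event *aux)
    {
            if (aux->flags & PERF_AUX_FLAG_TRUNCATED)
                    fprintf(stderr, "AUX data lost: buffer filled before it could be emptied\n");
    }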
-
- 10 April 2015 (1 commit)
-
-
Submitted by He Kuang

Commit f3b623b8 ("perf tools: Reference count struct thread") appends every thread->node to dead_threads in machine__remove_thread() and calls list_del_init() on this node in thread__put(). perf_event__exit_del_thread() releases the thread without using machine__remove_thread(), and causes a NULL pointer crash when list_del_init(&thread->node) is called.

Fix this by using machine__remove_thread() instead of calling thread__put() directly.

This problem can be reproduced as follows:

    $ perf record ls
    $ perf buildid-list --with-hits
    [ 3874.195070] perf[1018]: segfault at 0 ip 00000000004b0b15 sp 00007ffc35b44780 error 6 in perf[400000+166000]
    Segmentation fault

After this patch:

    $ perf record ls
    $ perf buildid-list --with-hits
    bc23e7c3281e542650ba4324421d6acf78f4c23e /proc/kcore
    643324cb0e969f30c56d660f167f84a150845511 [vdso]
    0000000000000000000000000000000000000000 /bin/busybox
    ...

Signed-off-by: He Kuang <hekuang@huawei.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1428658500-6483-1-git-send-email-hekuang@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 08 April 2015 (1 commit)
-
-
Submitted by Wang Nan

This patch adds checks in places where map__kmap is used to get a kmap from a struct map. Error messages are added in map__kmap to warn about invalid accesses to the kmap (in the !map->dso->kernel case, kmap(map) does not exist at all). It also introduces map__kmaps() to warn about uninitialized kmaps.

Reviewed-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: pi3orama@163.com
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Zefan Li <lizefan@huawei.com>
Link: http://lkml.kernel.org/r/1428394966-131044-2-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
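The added guard amounts to something like the following (simplified sketch; in perf a struct kmap is allocated right behind struct map only for kernel maps, which is why calling map__kmap on a user-level map is an error):

    #include <stdio.h>
    #include <stddef.h>

    struct kmap { void *kmaps; /* plus ref_reloc_sym, ... */ };
    struct dso  { int kernel; };
    struct map  { struct dso *dso; /* a struct kmap follows only for kernel maps */ };

    static struct kmap *map__kmap(struct map *map)
    {
            if (!map->dso || !map->dso->kernel) {
                    fprintf(stderr, "Internal error: map__kmap with a non-kernel map\n");
                    return NULL;    /* instead of handing back memory that was never allocated */
            }
            return (struct kmap *)(map + 1);
    }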
-
- 01 April 2015 (1 commit)
-
-
Submitted by David Hildenbrand

Commit 2e77784b ("perf callchain: Move cpumode resolve code to add_callchain_ip") promised "No change in behavior.". As that commit breaks callchains on s390x (symbols not getting resolved, observed when profiling the kernel), this statement is wrong.

The cpumode must be kept while iterating over all ips, otherwise the default (PERF_RECORD_MISC_USER) will be used by mistake.

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1427703060-59883-1-git-send-email-dahi@linux.vnet.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 23 March 2015 (3 commits)
-
-
Submitted by Jiri Olsa

Currently we assume machine__new_module is called only once for each module, so we create its map & dso unconditionally. However it's possible that it is called multiple times for the same module, like for perf record:

1) via machine__create_module during machine init
2) via kernel MMAP event processing

Try to look up the kernel module map before creating one.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-kx76xfqpnrpho5hdaapbqm09@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
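The change described above boils down to a find-before-create pattern; a sketch with hypothetical helper names:

    #include <stdint.h>

    struct map;
    struct machine;

    struct map *machine__find_module_map(struct machine *m, const char *name);
    struct map *machine__new_module_map(struct machine *m, uint64_t start, const char *name);

    /* Look the module map up first and only create it when missing, so a
     * second call for the same module does not produce a duplicate map/dso. */
    static struct map *machine__findnew_module_map(struct machine *m, uint64_t start,
                                                   const char *name)
    {
            struct map *map = machine__find_module_map(m, name);

            return map ? map : machine__new_module_map(m, start, name);
    }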
-
Submitted by Jiri Olsa

We no longer need the 'compressed' argument, because all current users pass 'NULL' for it.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-d72q2s7ggbmy2yzhumux4zzw@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Jiri Olsa

Replace the file name parsing with kmod_path__parse and move the dso update into a new, separate function.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-q0ed76ajcyoaofotntrg5sla@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 22 March 2015 (2 commits)
-
-
Submitted by Jiri Olsa

Use kmod_path__parse to get the module name and update the dso short name within the machine__new_dso function. This way it is done only once, when the dso is created, unlike the current approach of updating it every time we process a memory map of the kernel module.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-8gjmt1ggf5ls1xkk7qi2ko4k@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Jiri Olsa

Separate the dso object addition and update when adding a new kernel module. Currently we update the dso's symtab_type every time we find it in the list, because we can't distinguish between a new and a found dso returned from the __dsos__findnew function.

Add machine__module_dso, which separates finding and adding new dso objects, so there is no superfluous update of the dso.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-uvqgs5tyq4wssnq6fm43hgvk@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 03 March 2015 (1 commit)
-
-
Submitted by Arnaldo Carvalho de Melo

We need to do that to stop accumulating entries in the dead_threads linked list, i.e. we were keeping references to threads in struct hists that continue to exist even after a thread exited and was removed from the machine threads rbtree.

We still keep the dead_threads list, but just for debugging, allowing us to iterate at any given point over the threads that are still referenced by things like struct hist_entry.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-3ejvfyed0r7ue61dkurzjux4@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 19 February 2015 (1 commit)
-
-
Submitted by Kan Liang

LBR call stacks only contain user-space callchains. They are output in the PERF_SAMPLE_BRANCH_STACK data format. For kernel callchains, the data is still in the PERF_SAMPLE_CALLCHAIN format, so the perf tool has to handle both data sources to construct a complete callstack.

For the "perf report -D" option, both lbr and fp information will be displayed. A new call chain recording option, "lbr", is introduced into the perf tool for LBR call stacks. The user can use --call-graph lbr to get the call stack information from hardware.

Here are some examples. When profiling bc(1) on Fedora 19:

    echo 'scale=2000; 4*a(1)' > cmd; perf record --call-graph lbr bc -l < cmd

If LBR is enabled, perf report output looks like:

    50.36%  bc  bc  [.] bc_divide
            |
            --- bc_divide
                execute
                run_code
                yyparse
                main
                __libc_start_main
                _start

    33.66%  bc  bc  [.] _one_mult
            |
            --- _one_mult
                bc_divide
                execute
                run_code
                yyparse
                main
                __libc_start_main
                _start

     7.62%  bc  bc  [.] _bc_do_add
            |
            --- _bc_do_add
                |
                |--99.89%-- 0x2000186a8
                 --0.11%-- [...]

     6.83%  bc  bc  [.] _bc_do_sub
            |
            --- _bc_do_sub
                |
                |--99.94%-- bc_add
                |           execute
                |           run_code
                |           yyparse
                |           main
                |           __libc_start_main
                |           _start
                 --0.06%-- [...]

     0.46%  bc  libc-2.17.so  [.] __memset_sse2
            |
            --- __memset_sse2
                |
                |--54.13%-- bc_new_num
                |           |
                |           |--51.00%-- bc_divide
                |           |           execute
                |           |           run_code
                |           |           yyparse
                |           |           main
                |           |           __libc_start_main
                |           |           _start
                |           |
                |           |--30.46%-- _bc_do_sub
                |           |           bc_add
                |           |           execute
                |           |           run_code
                |           |           yyparse
                |           |           main
                |           |           __libc_start_main
                |           |           _start
                |           |
                |            --18.55%-- _bc_do_add
                |                       bc_add
                |                       execute
                |                       run_code
                |                       yyparse
                |                       main
                |                       __libc_start_main
                |                       _start
                |
                 --45.87%-- bc_divide
                            execute
                            run_code
                            yyparse
                            main
                            __libc_start_main
                            _start

If using FP, perf report output looks like:

    echo 'scale=2000; 4*a(1)' > cmd; perf record --call-graph fp bc -l < cmd

    50.49%  bc  bc  [.] bc_divide
            |
            --- bc_divide

    33.57%  bc  bc  [.] _one_mult
            |
            --- _one_mult

     7.61%  bc  bc  [.] _bc_do_add
            |
            --- _bc_do_add
                0x2000186a8

     6.88%  bc  bc  [.] _bc_do_sub
            |
            --- _bc_do_sub

     0.42%  bc  libc-2.17.so  [.] __memcpy_ssse3_back
            |
            --- __memcpy_ssse3_back

If using LBR, perf report -D output looks like:

    3458145275743 0x2fd750 [0xd8]: PERF_RECORD_SAMPLE(IP, 0x2): 9748/9748: 0x408ea8 period: 609644 addr: 0
    ... LBR call chain: nr:8
    .....  0: fffffffffffffe00
    .....  1: 0000000000408e50
    .....  2: 000000000040a458
    .....  3: 000000000040562e
    .....  4: 0000000000408590
    .....  5: 00000000004022c0
    .....  6: 00000000004015dd
    .....  7: 0000003d1cc21b43
    ... FP chain: nr:2
    .....  0: fffffffffffffe00
    .....  1: 0000000000408ea8
    ... thread: bc:9748
    ...... dso: /usr/bin/bc

The LBR call stack has the following known limitations:

- Zero-length calls are not filtered out by the hardware
- Exception handling such as setjmp/longjmp will have calls/returns not match
- Pushing a different return address onto the stack will have calls/returns not match
- If the callstack is deeper than the LBR, only the last entries are captured

Tested-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Kan Liang <kan.liang@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Simon Que <sque@chromium.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1420482185-29830-3-git-send-email-kan.liang@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 17 January 2015 (1 commit)
-
-
Submitted by Namhyung Kim

When thread__init_map_groups() fails, the new thread should be removed from the rbtree since it's going to be freed. Also, update the last match cache only if the function succeeded.

Reported-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1420763892-15535-1-git-send-email-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 09 December 2014 (1 commit)
-
-
Submitted by Kan Liang

Use a flag to distinguish between branch_history and normal callchains, and move the cpumode into the add_callchain_ip function. No change in behavior.

Signed-off-by: Kan Liang <kan.liang@intel.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1417532814-26208-3-git-send-email-kan.liang@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 02 December 2014 (1 commit)
-
-
Submitted by Andi Kleen

Currently branch stacks can only be shown as edge histograms for individual branches. I never found this display particularly useful.

This implements an alternative mode that creates histograms over complete branch traces, instead of individual branches, similar to how normal callgraphs are handled. This is done by putting the branch trace in front of the normal callgraph and then using the normal callgraph histogram infrastructure to unify them.

This way, in complex functions we can understand the control flow that led to a particular sample, and may even see some control flow in the caller for short functions.

Example (simplified; of course, for such simple code this is usually not needed). Please run this after the whole patchkit is in, as at this point in the patch order there is no --branch-history; that will be added in a patch after this one.

tcall.c:

    volatile a = 10000, b = 100000, c;

    __attribute__((noinline)) f2()
    {
            c = a / b;
    }

    __attribute__((noinline)) f1()
    {
            f2();
            f2();
    }

    main()
    {
            int i;
            for (i = 0; i < 1000000; i++)
                    f1();
    }

    % perf record -b -g ./tsrc/tcall
    [ perf record: Woken up 1 times to write data ]
    [ perf record: Captured and wrote 0.044 MB perf.data (~1923 samples) ]
    % perf report --no-children --branch-history
    ...
    54.91%  tcall.c:6  [.] f2  tcall
            |
            |--65.53%-- f2 tcall.c:5
            |           |
            |           |--70.83%-- f1 tcall.c:11
            |           |           f1 tcall.c:10
            |           |           main tcall.c:18
            |           |           main tcall.c:18
            |           |           main tcall.c:17
            |           |           main tcall.c:17
            |           |           f1 tcall.c:13
            |           |           f1 tcall.c:13
            |           |           f2 tcall.c:7
            |           |           f2 tcall.c:5
            |           |           f1 tcall.c:12
            |           |           f1 tcall.c:12
            |           |           f2 tcall.c:7
            |           |           f2 tcall.c:5
            |           |           f1 tcall.c:11
            |           |
            |            --29.17%-- f1 tcall.c:12
            |                       f1 tcall.c:12
            |                       f2 tcall.c:7
            |                       f2 tcall.c:5
            |                       f1 tcall.c:11
            |                       f1 tcall.c:10
            |                       main tcall.c:18
            |                       main tcall.c:18
            |                       main tcall.c:17
            |                       main tcall.c:17
            |                       f1 tcall.c:13
            |                       f1 tcall.c:13
            |                       f2 tcall.c:7
            |                       f2 tcall.c:5
            |                       f1 tcall.c:12

The default output is unchanged.

This is only implemented in perf report; there is no change to record or anywhere else.

This adds the basic code to report:

- add a new "branch" option to the -g option parser to enable this mode
- when the flag is set, include the LBR into the callstack in machine.c. The rest of the history code is unchanged and doesn't know the difference between an LBR entry and a normal call entry.
- detect overlaps with the callchain
- remove small loop duplicates in the LBR

Current limitations:

- The LBR flags (mispredict etc.) are not shown in the history and LBR entries have no special marker.
- It would be nice if annotate marked the LBR entries somehow (e.g. with arrows).

v2: Various fixes.
v3: Merge further patches into this one. Fix white space.
v4: Improve manpage. Address review feedback.
v5: Rename functions. Better error message without -g. Fix crash without -b.
v6: Rebase
v7: Rebase. Use NO_ENTRY in memset.
v8: Port to latest tip. Move add_callchain_ip to separate patch. Skip initial entries in callchain. Minor cleanups.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1415844328-4884-3-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 25 November 2014 (1 commit)
-
-
Submitted by Namhyung Kim

Jiri reported that commit 96d78059 ("perf tools: Make vmlinux short name more like kallsyms short name") segfaults on perf script. When processing the kernel mmap event, it should access the 'kernel' variable, as sometimes a matching dso cannot be found in the build-id table, so 'dso' might be invalid.

Reported-by: Jiri Olsa <jolsa@redhat.com>
Tested-by: Jiri Olsa <jolsa@redhat.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung.kim@lge.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1416285028-30572-1-git-send-email-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 19 November 2014 (2 commits)
-
-
Submitted by Andi Kleen

Use the relative address; this makes get_srcline work correctly in the end.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1415844328-4884-4-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Andi Kleen
Move the code to resolve and add a new callchain entry into a new add_callchain_ip function. This will be used in the next patches to add LBRs too. No change in behavior.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1415844328-4884-2-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 05 November 2014 (2 commits)
-
-
Submitted by Namhyung Kim

The previous patch changed the kernel dso name from '[kernel.kallsyms]' to vmlinux. However, this might confuse old users accustomed to the old name, so change the short name to '[kernel.vmlinux]' to reduce such confusion.

Before:

    # Overhead  Command         Shared Object            Symbol
    # ........  ..............  .......................  ...............................
    #
         9.83%  swapper         vmlinux                  [k] intel_idle
         4.10%  awk             libc-2.20.so             [.] __strcmp_sse2
         1.86%  sed             libc-2.20.so             [.] __strcmp_sse2
         1.78%  netctl-auto     libc-2.20.so             [.] __strcmp_sse2
         1.23%  netctl-auto     libc-2.20.so             [.] __mbrtowc
         1.21%  firefox         libxul.so                [.] 0x00000000024b62bd
         1.20%  swapper         vmlinux                  [k] cpuidle_enter_state
         1.03%  sleep           vmlinux                  [k] copy_user_generic_unrolled

After:

    # Overhead  Command         Shared Object            Symbol
    # ........  ..............  .......................  ...............................
    #
         9.83%  swapper         [kernel.vmlinux]         [k] intel_idle
         4.10%  awk             libc-2.20.so             [.] __strcmp_sse2
         1.86%  sed             libc-2.20.so             [.] __strcmp_sse2
         1.78%  netctl-auto     libc-2.20.so             [.] __strcmp_sse2
         1.23%  netctl-auto     libc-2.20.so             [.] __mbrtowc
         1.21%  firefox         libxul.so                [.] 0x00000000024b62bd
         1.20%  swapper         [kernel.vmlinux]         [k] cpuidle_enter_state
         1.03%  sleep           [kernel.vmlinux]         [k] copy_user_generic_unrolled

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung.kim@lge.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1415063674-17206-9-git-send-email-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Namhyung Kim

There's a problem finding the correct kernel symbols when perf report runs on a different kernel. Although part of the problem was solved by the prior commit 0a7e6d1b ("perf tools: Check recorded kernel version when finding vmlinux"), a problem still remains.

When perf records samples, it synthesizes the kernel map using machine__mmap_name() and ref_reloc_sym like "[kernel.kallsyms]_text". You can easily see it using the 'perf report -D' command.

After finishing the record, it goes through the recorded events to find the maps/dsos actually used, and then records their build-id info. During this process it needs to load symbols in a dso, so it calls dso__load_vmlinux_path(), since the default value of symbol_conf.try_vmlinux_path is true. However, this changes dso->long_name to a real path of the vmlinux file (e.g. /lib/modules/3.16.4/build/vmlinux) if one is running on a custom kernel.

The result is that perf report reads the build-id of the vmlinux but cannot use it, since it only knows about the [kernel.kallsyms] map. It then silently falls back to possible vmlinux paths by using the recorded kernel version (in case of a recent version) or the running kernel.

Even with recent tools, this can still break the result. As the build directory is a symbolic link, if one built a new kernel in the same directory with a different source/config, the old link to vmlinux will point to the new file. So it's absolutely necessary to use the build-id when finding a kernel image.

In this patch, it's now changed to try to search for a kernel dso in the existing dso list, which was constructed during build-id table parsing, so it will always have a build-id. If not found, it searches for "[kernel.kallsyms]".

Before:

    $ perf report
    # Children      Self  Command  Shared Object      Symbol
    # ........  ........  .......  .................  ...............................
    #
        72.15%     0.00%  swapper  [kernel.kallsyms]  [k] set_curr_task_rt
        72.15%     0.00%  swapper  [kernel.kallsyms]  [k] native_calibrate_tsc
        72.15%     0.00%  swapper  [kernel.kallsyms]  [k] tsc_refine_calibration_work
        71.87%    71.87%  swapper  [kernel.kallsyms]  [k] module_finalize
        ...

After (for the same perf.data):

        72.15%     0.00%  swapper  vmlinux  [k] cpu_startup_entry
        72.15%     0.00%  swapper  vmlinux  [k] arch_cpu_idle
        72.15%     0.00%  swapper  vmlinux  [k] default_idle
        71.87%    71.87%  swapper  vmlinux  [k] native_safe_halt
        ...

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Ingo Molnar <mingo@kernel.org>
Link: http://lkml.kernel.org/r/20140924073356.GB1962@gmail.com
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung.kim@lge.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1415063674-17206-8-git-send-email-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 04 November 2014 (1 commit)
-
-
Submitted by Namhyung Kim

This patch adds basic support for handling compressed kernel modules, as some distros (such as Archlinux) now ship them. The actual work using the compression library will be added later.

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung.kim@lge.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1415063674-17206-2-git-send-email-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 29 October 2014 (4 commits)
-
-
Submitted by Arnaldo Carvalho de Melo

unwind__get_entries() already receives the thread parameter, from which it can obtain the matching machine structure, so shorten the signature.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jean Pihet <jean.pihet@linaro.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-isjc6bm8mv4612mhi6af64go@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo

This also shortens the function signature, since a thread's machine can be obtained from thread->mg->machine; there is no need to pass both thread and machine.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jean Pihet <jean.pihet@linaro.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-5wb6css280ty0cel5p0zo2b1@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
So stop passing both machine and thread to several thread methods, reducing function signature length.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jean Pihet <jean.pihet@linaro.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-ckcy19dcp1jfkmdihdjcqdn1@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo

We were setting this only in machine__init(), i.e. for the map_groups that holds the kernel module maps, not for the one used for a thread's executable mmaps.

Now we are sure that we can obtain the machine where a thread is by going via thread->mg->machine, thus we can, in the following patch, make all codepaths that receive both machine and thread drop the machine one.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jean Pihet <jean.pihet@linaro.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-y6zgaqsvhrf04v57u15e4ybm@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 15 October 2014 (1 commit)
-
-
Submitted by Arnaldo Carvalho de Melo

A segfault happens on 'perf test hists_link' because we end up using a struct machines on the stack, and machines__init() was not initializing the newly introduced rb_root, just the existing list_head.

When we introduced struct dsos, to group the two ways of storing dsos, i.e. the linked list and the rbtree, we didn't turn the initialization done in:

    machines__init(machines->host) -> machine__init() -> INIT_LIST_HEAD

into a dsos__init() that keeps initializing the list_head but _as well_ initializes the rb_root, oops.

Everything worked because outside 'perf test' we probably zalloc the whole thing, which ends up initializing it to NULL. So the problem looks contained to 'perf test', which uses it on the stack, etc.

Reported-by: Jiri Olsa <jolsa@redhat.com>
Acked-by: Waiman Long <Waiman.Long@hp.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Douglas Hatch <doug.hatch@hp.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Scott J Norton <scott.norton@hp.com>
Cc: Waiman Long <Waiman.Long@hp.com>
Link: http://lkml.kernel.org/r/20141014180353.GF3198@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
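The fix essentially gives the grouped structure its own initializer covering both containers (sketch; the struct dsos grouping itself is introduced in the 30 September 2014 entry further down):

    #include <linux/list.h>
    #include <linux/rbtree.h>

    struct dsos {
            struct list_head head;
            struct rb_root   root;  /* rbtree keyed by the DSO long name */
    };

    /* One initializer for both containers, so stack-allocated users such as
     * 'perf test' do not end up with an uninitialized rb_root. */
    static void dsos__init(struct dsos *dsos)
    {
            INIT_LIST_HEAD(&dsos->head);
            dsos->root = RB_ROOT;
    }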
-
- 02 October 2014 (1 commit)
-
-
Submitted by Waiman Long

With a workload that spawns and destroys many threads and processes, it was found that perf-mem could take a long time to post-process the perf data after the target workload had completed its operation. The performance bottleneck was found to be the lookup and insertion of the new DSO structures (thousands of them in this case).

In a dual-socket Ivy-Bridge E7-4890 v2 machine (30-core, 60-thread), the perf profile below shows what perf was doing after the profiled AIM7 shared workload completed:

    -  83.94%  perf  libc-2.11.3.so  [.] __strcmp_sse42
       - __strcmp_sse42
          - 99.82% map__new
               machine__process_mmap_event
               perf_session_deliver_event
               perf_session__process_event
               __perf_session__process_events
               cmd_record
               cmd_mem
               run_builtin
               main
               __libc_start_main
    -  13.17%  perf  perf            [.] __dsos__findnew
         __dsos__findnew
         map__new
         machine__process_mmap_event
         perf_session_deliver_event
         perf_session__process_event
         __perf_session__process_events
         cmd_record
         cmd_mem
         run_builtin
         main
         __libc_start_main

So about 97% of CPU time was spent in the map__new() function trying to insert new DSO entries into the DSO linked list. The whole post-processing step took about 9 minutes.

The DSO structures are currently searched linearly, so the total processing time is proportional to n^2. To overcome this performance problem, the DSO code is modified to also put the DSO structures in an rbtree sorted by long name, in addition to keeping them in a simple linked list. With this change, the processing time becomes proportional to n*log(n), which is much quicker for large n. The short name, however, is still searched using the old linear method.

With this patch in place, the same perf-mem post-processing step took less than 30 seconds to complete.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Douglas Hatch <doug.hatch@hp.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Scott J Norton <scott.norton@hp.com>
Link: http://lkml.kernel.org/r/1412098575-27863-3-git-send-email-Waiman.Long@hp.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
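A sketch of the kind of rbtree insertion this introduces (simplified; the real code keeps each DSO in both the linked list and the rbtree, and short-name lookups stay linear):

    #include <string.h>
    #include <linux/rbtree.h>

    struct dso {
            struct rb_node  rb_node;        /* keyed by long_name */
            const char     *long_name;
    };

    /* O(log n) find-or-insert by long name instead of walking a linked list. */
    static struct dso *dsos__findnew_by_longname(struct rb_root *root, struct dso *dso)
    {
            struct rb_node **p = &root->rb_node, *parent = NULL;

            while (*p) {
                    struct dso *this = rb_entry(*p, struct dso, rb_node);
                    int rc = strcmp(dso->long_name, this->long_name);

                    parent = *p;
                    if (rc == 0)
                            return this;            /* already present */
                    p = rc < 0 ? &(*p)->rb_left : &(*p)->rb_right;
            }
            rb_link_node(&dso->rb_node, parent, p);
            rb_insert_color(&dso->rb_node, root);
            return dso;
    }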
-
- 30 September 2014 (1 commit)
-
-
Submitted by Waiman Long

This is a precursor patch to enable long-name searching of DSOs using an rbtree.

In this patch, a new dsos structure is created which contains only a list_head structure for the moment. The new dsos structure is used, in turn, in the machine structure for the user_dsos and kernel_dsos fields.

Only the following 3 dsos functions are modified to accept the new dsos structure parameter instead of list_head:

- dsos__add()
- dsos__find()
- __dsos__findnew()

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Douglas Hatch <doug.hatch@hp.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Scott J Norton <scott.norton@hp.com>
Link: http://lkml.kernel.org/r/1412021249-19201-2-git-send-email-Waiman.Long@hp.com
[ Move struct dsos to dso.h to reduce the dso methods' dependence on machine.h ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-