- 02 Mar 2010, 1 commit
-
-
Submitted by Peter Zijlstra

The ANY flag can show SMT data for another task (like 'top'), so we want to disable it when system-wide profiling is disabled.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
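The gating boils down to a permission check on the ANY bit, roughly like the following sketch (a simplified rendering of the idea, not necessarily the exact patch):

    /* Sketch: refuse the ANY-thread bit for unprivileged users when
     * system-wide profiling is restricted by perf_event_paranoid. */
    if (x86_pmu.version >= 3 &&
        (event->attr.config & ARCH_PERFMON_EVENTSEL_ANY) &&
        perf_paranoid_cpu() && !capable(CAP_SYS_ADMIN))
            return -EACCES;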
-
- 01 Mar 2010, 4 commits
-
-
Submitted by Robert Richter

For consistency reasons this patch renames ARCH_PERFMON_EVENTSEL0_ENABLE to ARCH_PERFMON_EVENTSEL_ENABLE. The following was performed:

    $ sed -i -e s/ARCH_PERFMON_EVENTSEL0_ENABLE/ARCH_PERFMON_EVENTSEL_ENABLE/g \
        arch/x86/include/asm/perf_event.h arch/x86/kernel/cpu/perf_event.c \
        arch/x86/kernel/cpu/perf_event_p6.c \
        arch/x86/kernel/cpu/perfctr-watchdog.c \
        arch/x86/oprofile/op_model_amd.c arch/x86/oprofile/op_model_ppro.c

Signed-off-by: Robert Richter <robert.richter@amd.com>
-
Submitted by Robert Richter

Signed-off-by: Robert Richter <robert.richter@amd.com>
-
Submitted by Robert Richter

This patch moves code from oprofile to perf_event.h to make it also available for use by perf.

Signed-off-by: Robert Richter <robert.richter@amd.com>
-
Submitted by Robert Richter

Signed-off-by: Robert Richter <robert.richter@amd.com>
-
- 28 Feb 2010, 4 commits
-
-
Submitted by Frederic Weisbecker

Remove the name field from arch_hw_breakpoint. We never deal with target symbols at the arch level, nor do we ever need to store them. It's a legacy of the previous version of the x86 breakpoint backend. Let's remove it.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: K.Prasad <prasad@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Frederic Weisbecker

Remove the pointless union in the breakpoint field of hw_perf_event.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
-
Submitted by Frederic Weisbecker

We need to deal with time-ordered events to build a correct state machine of lock events. This is why we multiplex the lock event buffers. But the ordering is done from the kernel, on the tracing fast path, leading to high contention between cpus. Without multiplexing, the events appear in a weak order. If we have four events, each split per cpu, perf record will read the event buffers in the following order:

    [ CPU0 ev0, CPU0 ev1, CPU0 ev3, CPU0 ev4, CPU1 ev0, CPU1 ev0, ... ]

To reorder in post-processing, we could just read and sort the whole thing in memory, but that doesn't scale to large numbers of events: lock events can fill huge amounts of memory in a short time. Basically we need to sort in memory and find a "grace period" point when we know that a given slice of previously sorted events can be committed for post-processing, so that we can release memory step by step and keep a scalable sorting list. There are no strong rules about how to define such a "grace period". What this patch does is:

We define a FLUSH_PERIOD value as a grace period in seconds. We want a slice of events covering 2 * FLUSH_PERIOD in our sorted list. If FLUSH_PERIOD is big enough, it ensures that every event that occurred in the first half of the timeslice has been buffered and that no further events will land inside this first timeslice. Once we reach the 2 * FLUSH_PERIOD timeslice, we flush the first half to be gentle with memory (the second half can still receive new events in the middle, so we wait another period before flushing it).

FLUSH_PERIOD is defined as 5 seconds. Say the first event arrived at time t0. We can safely assume that by the time we are processing events at t0 + 10 seconds, there won't be any more events to read from perf.data that occurred between t0 and t0 + 5 seconds. Hence we can safely flush the first half.

To point out funky bugs, we have a guardian that checks that a new event's timestamp is not below the timestamp of the last flushed event, and displays a warning in that case. A sketch of this windowing logic follows below.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
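The two-half flush window, as a minimal C sketch (struct raw_event, insert_sorted() and flush_older_than() are illustrative names, not the actual tool code):

    #define FLUSH_PERIOD    (5ULL * NSEC_PER_SEC)  /* the 5 second grace period */

    static u64 flush_limit;      /* end of the first half-window       */
    static u64 last_flushed_ts;  /* guardian: newest flushed timestamp */

    static void queue_raw_event(struct raw_event *ev)
    {
        /* Guardian: a flushed slice must never be reopened. */
        if (ev->timestamp < last_flushed_ts)
            fprintf(stderr, "Warning: timestamp below last flush\n");

        insert_sorted(&event_list, ev);     /* time-ordered insert */

        /* Once the window spans 2 * FLUSH_PERIOD, the first half can
         * no longer receive out-of-order events: commit it and slide. */
        if (ev->timestamp >= flush_limit + FLUSH_PERIOD) {
            flush_older_than(&event_list, flush_limit);
            last_flushed_ts = flush_limit;
            flush_limit += FLUSH_PERIOD;
        }
    }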
-
Submitted by Hitoshi Mitake

I forgot to add a 'perf lock' line to command-list.txt, so users of perf could not find perf lock when they typed 'perf'. Fixing command-list.txt requires a document (tools/perf/Documentation/perf-lock.txt), but perf lock is too much "under construction" to write a stable document for, so this is something like a pseudo document for now. I also wrote a description of perf lock in the help section of CONFIG_LOCK_STAT; this will guide users toward the lock trace events.

Signed-off-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
LKML-Reference: <1265267295-8388-1-git-send-email-mitake@dcl.info.waseda.ac.jp>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
-
- 27 Feb 2010, 4 commits
-
-
Submitted by Tejun Heo

Add __percpu sparse annotations to hw_breakpoint. These annotations make sparse consider percpu variables to be in a different address space and warn if they are accessed without going through the percpu accessors. This patch doesn't affect normal builds. In kernel/hw_breakpoint.c, per_cpu(nr_task_bp_pinned, cpu) will trigger spurious noderef-related warnings from sparse. Changing it to &per_cpu(nr_task_bp_pinned[0], cpu) would work around the problem but was deemed too ugly by the maintainer. Leave it alone until a better solution can be found.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: K.Prasad <prasad@linux.vnet.ibm.com>
LKML-Reference: <4B7B4B7A.9050902@kernel.org>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
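The effect of the annotation, as a contrived example (not the patch itself; the variable name is made up):

    /* DEFINE_PER_CPU carries the __percpu annotation: sparse places the
     * variable in the percpu address space and warns on a direct
     * dereference that bypasses the accessors. */
    static DEFINE_PER_CPU(unsigned int, nr_cpu_bp_pinned);

    static unsigned int read_pinned(int cpu)
    {
        return per_cpu(nr_cpu_bp_pinned, cpu);  /* ok: accessor used */
    }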
-
Submitted by Frederic Weisbecker

Merge reason: __percpu annotations need the corresponding sparse address space definition upstream.

Conflicts:
    tools/perf/util/probe-event.c (trivial)
-
Submitted by Peter Zijlstra

Prevent kernels from exploding on AMD machines when they have any lock debugging bits enabled.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Peter Zijlstra

A recent commit introduced a preemption warning for perf_clock(); use raw_smp_processor_id() to avoid it, since it really doesn't matter which cpu we use here.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1267198583.22519.684.camel@laptop>
Cc: <stable@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
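The change boils down to something like this (sketch):

    static inline u64 perf_clock(void)
    {
        /* raw_smp_processor_id() skips the preemption check; any
         * cpu's clock is acceptable for this timestamp. */
        return cpu_clock(raw_smp_processor_id());
    }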
-
- 26 Feb 2010, 21 commits
-
-
Submitted by David S. Miller

Even though we don't register the counters until the child is right about to exec(), we're still going to get at least a few events while the fork()'d child is still executing 'perf', and in particular we're going to get the MMAP events. We can't distinguish the ones in the newly executed process because the PID will be the same. One way to solve this would be to have a PERF_RECORD_EXEC event; when 'perf' sees it, it can flush its map cache. We can't use PERF_RECORD_COMM since that's generated by other things, not just exec(). Actually, thinking about it some more, using PERF_RECORD_COMM might be a good enough approximation.

Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1267196914-16238-1-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Peter Zijlstra

Split amd, p6 and intel into separate files so that we can easily deal with CONFIG_CPU_SUP_* things; this is needed to make things build now that perf_event.c relies on symbols from amd.c.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Arnaldo Carvalho de Melo

Without this patch we get this for need_resched:

    [root@mica ~]# perf annotate need_resched
    ------------------------------------------------
     Percent |  Source code & Disassembly of vmlinux
    ------------------------------------------------
             :
             :
             :  Disassembly of section .text:
             :
             :  ffffffff810095ed <need_resched>:
             :          return (state & TASK_INTERRUPTIBLE) || __fatal_signal_pending(p);
             :  }
             :
             :  static inline int need_resched(void)
             :  {
        0.00 :  ffffffff810095ed:       55      push   %rbp
             :          return unlikely(test_thread_flag(TIF_NEED_RESCHED));
        0.00 :  ffffffff810095ee:       be 03 00 00 00  mov    $0x3,%esi
             :
             :  static inline struct thread_info *current_thread_info(void)
             :  {
             :          struct thread_info *ti;
             :          ti = (void *)(percpu_read_stable(kernel_stack) +
        0.00 :  ffffffff810095f3:       65 48 8b 3c 25 48 b5    mov    %gs:0xb548,%rdi
        0.00 :  ffffffff810095fa:       00 00
             :          return (state & TASK_INTERRUPTIBLE) || __fatal_signal_pending(p);
             :  }
             :
             :  static inline int need_resched(void)
             :  {
        0.00 :  ffffffff810095fc:       48 89 e5        mov    %rsp,%rbp
             :          return unlikely(test_thread_flag(TIF_NEED_RESCHED));
        0.00 :  ffffffff810095ff:       48 81 ef d8 1f 00 00    sub    $0x1fd8,%rdi
        0.00 :  ffffffff81009606:       e8 9d ff ff ff  callq  ffffffff810095a8 <test_ti_thread_flag>
             :  }
        0.00 :  ffffffff8100960b:       c9      leaveq
        0.00 :  ffffffff8100960c:       85 c0   test   %eax,%eax
        0.00 :  ffffffff8100960e:       0f 95 c0        setne  %al
        0.00 :  ffffffff81009611:       0f b6 c0        movzbl %al,%eax
             :  Disassembly of section .vsyscall_0:
             :  Disassembly of section .vsyscall_fn:
             :  Disassembly of section .vsyscall_1:
             :  Disassembly of section .vsyscall_2:
             :  Disassembly of section .init.text:
             :  Disassembly of section .altinstr_replacement:
             :  Disassembly of section .exit.text:
    [root@mica ~]#

But from the 'perf report' result we know that there are hits for need_resched on a 4-way machine mostly doing nothing, so after adding code to show what is in each hist offset and collapsing IP hits for what happens between objdump lines, we get, for the same perf.data file:

    [root@mica ~]# perf annotate -v need_resched
    ------------------------------------------------
     Percent |  Source code & Disassembly of vmlinux
    ------------------------------------------------
             :
             :
             :  Disassembly of section .text:
             :
             :  ffffffff810095ed <need_resched>:
             :          return (state & TASK_INTERRUPTIBLE) || __fatal_signal_pending(p);
             :  }
             :
             :  static inline int need_resched(void)
             :  {
        0.00 :  ffffffff810095ed:       55      push   %rbp
             :          return unlikely(test_thread_flag(TIF_NEED_RESCHED));
       52.78 :  ffffffff810095ee:       be 03 00 00 00  mov    $0x3,%esi
             :
             :  static inline struct thread_info *current_thread_info(void)
             :  {
             :          struct thread_info *ti;
             :          ti = (void *)(percpu_read_stable(kernel_stack) +
        0.00 :  ffffffff810095f3:       65 48 8b 3c 25 48 b5    mov    %gs:0xb548,%rdi
        0.00 :  ffffffff810095fa:       00 00
             :          return (state & TASK_INTERRUPTIBLE) || __fatal_signal_pending(p);
             :  }
             :
             :  static inline int need_resched(void)
             :  {
        0.00 :  ffffffff810095fc:       48 89 e5        mov    %rsp,%rbp
             :          return unlikely(test_thread_flag(TIF_NEED_RESCHED));
        9.72 :  ffffffff810095ff:       48 81 ef d8 1f 00 00    sub    $0x1fd8,%rdi
        0.00 :  ffffffff81009606:       e8 9d ff ff ff  callq  ffffffff810095a8 <test_ti_thread_flag>
             :  }
        0.00 :  ffffffff8100960b:       c9      leaveq
        0.00 :  ffffffff8100960c:       85 c0   test   %eax,%eax
       37.50 :  ffffffff8100960e:       0f 95 c0        setne  %al
        0.00 :  ffffffff81009611:       0f b6 c0        movzbl %al,%eax
             :  Disassembly of section .vsyscall_0:
             :  Disassembly of section .vsyscall_fn:
             :  Disassembly of section .vsyscall_1:
             :  Disassembly of section .vsyscall_2:
             :  Disassembly of section .init.text:
             :  Disassembly of section .altinstr_replacement:
             :  Disassembly of section .exit.text:
    [root@mica ~]#

And now 'perf annotate -v', verbose mode, will show the hits per precise IP, so that one can make sense of the attribution to each objdump line:

    [root@mica ~]# perf annotate -v need_resched
    Looking at the vmlinux_path (5 entries long)
    Using /lib/modules/2.6.33-rc8-tip-00784-g3471df5-dirty/build/vmlinux for symbols
    annotate_sym: filename=/lib/modules/2.6.33-rc8-tip-00784-g3471df5-dirty/build/vmlinux, sym=need_resched, start=0xffffffff810095ed, end=0xffffffff81009614
    ------------------------------------------------
     Percent |  Source code & Disassembly of vmlinux
    ------------------------------------------------
    ffffffff810095f1: 152
    ffffffff81009603: 28
    ffffffff8100960f: 55
    ffffffff81009610: 53
    h->sum: 288
    <SNIP same annotation>

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: David Miller <davem@davemloft.net>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1267194194-15670-1-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Robert Richter

While switching virtual counters there is access to the perfctr msrs. If the counter is not available this fails due to an invalid address. This patch fixes this.

Cc: stable@kernel.org
Signed-off-by: Robert Richter <robert.richter@amd.com>
-
Submitted by Robert Richter

Cc: stable@kernel.org
Signed-off-by: Robert Richter <robert.richter@amd.com>
-
Submitted by Robert Richter

Multiple virtual counters share one physical counter. Reservation of the virtual counters fails due to duplicate allocation of the same physical counter, which is already reserved. Thus, virtual counter reservation can be removed altogether, which also simplifies the code.

Cc: stable@kernel.org
Signed-off-by: Robert Richter <robert.richter@amd.com>
-
Submitted by Naga Chumbalkar

Currently, oprofile fails silently on platforms where a non-OS entity such as the system firmware "enables" and uses a performance counter. There is now a warning in the code for this case, indicating an already running counter. If oprofile doesn't collect data, try using a different performance counter on your platform to monitor the desired event: delete the counter from the desired event by editing the /usr/share/oprofile/<cpu_type>/<cpu>/events file. If the event cannot be monitored by any other counter, contact your hardware or BIOS vendor.

Cc: Shashi Belur <shashi-kiran.belur@hp.com>
Cc: Tony Jones <tonyj@suse.de>
Signed-off-by: Naga Chumbalkar <nagananda.chumbalkar@hp.com>
Signed-off-by: Robert Richter <robert.richter@amd.com>
-
Submitted by Robert Richter

This patch generates a warning if a counter is already active. It is implemented for the AMD and P6 models; P4 is not supported.

Cc: Naga Chumbalkar <nagananda.chumbalkar@hp.com>
Cc: Shashi Belur <shashi-kiran.belur@hp.com>
Cc: Tony Jones <tonyj@suse.de>
Signed-off-by: Robert Richter <robert.richter@amd.com>
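The check amounts to reading back the eventsel msr before claiming the counter, roughly like this sketch (function name and exact message are illustrative):

    /* Sketch: warn if firmware or anything else left counter i running. */
    static void warn_if_counter_in_use(struct op_msrs const * const msrs, int i)
    {
        u64 val;

        rdmsrl(msrs->controls[i].addr, val);   /* read the eventsel msr */
        if (val & ARCH_PERFMON_EVENTSEL_ENABLE)
            pr_warning("oprofile: counter #%d on cpu #%d may already be used\n",
                       i, smp_processor_id());
    }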
-
Submitted by Robert Richter

IBS selects an op (execution operation) for sampling by counting either cycles or dispatched ops. Better statistical samples can be produced by adding a software-generated random offset to the periodic op counter value with each sample.

This patch adds software randomization to the IBS periodic op counter. The lower 12 bits of the 20-bit counter are randomized; IbsOpCurCnt is initialized with a 12-bit random value. There is a workaround for hardware that cannot write to IbsOpCurCnt: in that case the lower 8 bits of the 16-bit IbsOpMaxCnt[15:0] value are randomized in the range of -128 to +127 by adding/subtracting an offset to/from the maximum count (IbsOpMaxCnt).

The linear feedback shift register (LFSR) algorithm is used for pseudo-random number generation to keep the impact on the memory system low.

Signed-off-by: Robert Richter <robert.richter@amd.com>
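The fallback path can be sketched in a few lines, using the lfsr_random() generator from the companion LFSR patch below (the helper name here is illustrative):

    /* Sketch of the workaround: when IbsOpCurCnt is not writable, bias
     * IbsOpMaxCnt by a signed random byte, i.e. an offset in [-128, +127]. */
    static u64 randomize_op_maxcnt(u64 maxcnt)
    {
        return maxcnt + (s8)(lfsr_random() & 0xff);
    }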
-
Submitted by Suravee Suthikulpanit

This patch implements a linear feedback shift register (LFSR) for pseudo-random number generation for IBS. For IBS measurements it is good to minimize memory traffic in the interrupt handler, since every access pollutes the data caches. Computing a maximal-period LFSR needs just shifts and ORs. The LFSR method is good enough to randomize the ops at low overhead: 16 pseudo-random bits are enough for the implementation, and it doesn't matter that the pattern repeats with a fairly short cycle. It only needs to break up (hard) periodic sampling behavior. The logic was designed by Paul Drongowski.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Robert Richter <robert.richter@amd.com>
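The generator itself, essentially as described (a sketch close to the in-kernel version; the seed value is arbitrary):

    /* 16-bit maximal-period Fibonacci LFSR: only shifts and XORs, so it
     * touches no memory beyond one static word. */
    static unsigned int lfsr_random(void)
    {
        static unsigned int lfsr_value = 0xF00D;
        unsigned int bit;

        /* Compute the next bit to shift in from taps 0, 2, 3 and 5. */
        bit = ((lfsr_value >> 0) ^ (lfsr_value >> 2) ^
               (lfsr_value >> 3) ^ (lfsr_value >> 5)) & 0x0001;

        /* Advance to the next register value. */
        lfsr_value = (lfsr_value >> 1) | (bit << 15);

        return lfsr_value & 0xFFFF;
    }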
-
Submitted by Robert Richter

This patch adds IBS feature detection using cpuid flags. An IBS capability mask is introduced to test for certain IBS features. The bit mask is the same as the IBS cpuid feature flags (Fn8000_001B_EAX), but bit 0 is used to indicate the existence of IBS.

The patch also changes the handling of the IbsOpCntCtl bit (periodic op counter count control). The oprofilefs file for this feature (ibs_op/dispatched_ops) is only exposed if the feature is available, and the default for the bit is to count clock cycles.

In general, userland can detect the availability of a feature by checking for the corresponding file in oprofilefs: if it exists, the feature also exists. This may lead to a dynamic file layout depending on the cpu type, which userland has to deal with. Current opcontrol is compatible.

Signed-off-by: Robert Richter <robert.richter@amd.com>
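A sketch of the detection (macro names and the exact fallback are illustrative; the feature flags live in cpuid Fn8000_001B EAX):

    #define IBS_CAPS_AVAIL   (1U << 0)   /* bit 0 reused: IBS present  */
    #define IBS_CAPS_OPCNT   (1U << 4)   /* IbsOpCntCtl is implemented */

    static u32 get_ibs_caps(void)
    {
        u32 caps;

        if (!boot_cpu_has(X86_FEATURE_IBS))
            return 0;

        caps = cpuid_eax(0x8000001b);
        if (!(caps & IBS_CAPS_AVAIL))
            caps = IBS_CAPS_AVAIL;   /* older cpu: base IBS only */

        return caps;
    }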
-
Submitted by Robert Richter

Standard AMD systems have the same number of nodes as there are northbridge devices. However, kernel configurations (especially for 32 bit) or system setups may exist where the node number is different or cannot be detected properly. Thus the check is not reliable and may fail even though the IBS setup is fine. For this reason it is better to remove the check.

Cc: stable <stable@kernel.org>
Signed-off-by: Robert Richter <robert.richter@amd.com>
-
Submitted by Robert Richter

OProfile support for IBS has been in the kernel for several versions now. The feature is stable and the code can be activated permanently. As a side effect, IBS now also works on nosmp configs.

Signed-off-by: Robert Richter <robert.richter@amd.com>
-
Submitted by Robert Richter

OProfile has been in use for a long time and is no longer experimental.

Signed-off-by: Robert Richter <robert.richter@amd.com>
-
Submitted by Robert Richter

Commit 1155de47 ("ring-buffer: Make it generally available") already made the ring buffer available without the TRACING option enabled. This patch removes the TRACING dependency from oprofile. It also fixes the oprofile configuration on ia64. The patch applies to the 2.6.32-stable kernel as well.

Reported-by: Tony Jones <tonyj@suse.de>
Cc: stable@kernel.org
Signed-off-by: Robert Richter <robert.richter@amd.com>
-
Submitted by Peter Zijlstra

We re-program the event control register every time we reset the count; this appears to be superfluous, hence remove it.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arjan van de Ven <arjan@linux.intel.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Peter Zijlstra

Since the cpu argument to hw_perf_group_sched_in() is always smp_processor_id(), simplify the code a little by removing this argument and using the current cpu where needed.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: David Miller <davem@davemloft.net>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <1265890918.5396.3.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Stephane Eranian

This patch adds correct AMD Northbridge event scheduling. NB events are events measuring L3 cache and HyperTransport traffic. They are identified by an event code >= 0xe0 and measure events on the Northbridge, which is shared by all cores on a package. NB events are counted on a shared set of counters: when an NB event is programmed into a counter, the data actually comes from a shared counter, so access to those counters needs to be synchronized.

We implement the synchronization such that no two cores can be measuring NB events using the same counters. Thus, we maintain a per-NB allocation table. The available slot is propagated using the event_constraint structure.

Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <4b703957.0702d00a.6bf2.7b7d@mx.google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
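The core of the scheme, as a simplified sketch (the claim helper is illustrative; the real scheduler also tracks constraints per slot):

    /* One allocation table per Northbridge: owners[] records which
     * event holds each shared counter slot. */
    struct amd_nb {
        int nb_id;                                /* Northbridge id */
        struct perf_event *owners[X86_PMC_IDX_MAX];
    };

    static bool nb_claim_counter(struct amd_nb *nb, int idx,
                                 struct perf_event *event)
    {
        /* cmpxchg: only one core can install its event in slot idx,
         * so two cores never program NB events into the same counter. */
        return cmpxchg(&nb->owners[idx], NULL, event) == NULL;
    }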
-
Submitted by Stephane Eranian

In certain situations, the kernel may need to stop and start the same event rapidly. The current PMU callbacks do not distinguish between stop and release (i.e., stop + free the resource). Thus, a counter may be released and then immediately re-acquired. Event scheduling will again take place with no guarantee of assigning the same counter, and on some processors this may even lead to a failure to assign the event back, due to competition between cores.

This patch adds a new pair of callbacks to stop and restart a counter without actually releasing the underlying counter resource. On stop, the counter is stopped and its value saved, and that's it. On start, the value is reloaded and the counter restarted (on x86, the actual restart is delayed until perf_enable()).

Signed-off-by: Stephane Eranian <eranian@google.com>
[ added fallback to ->enable/->disable for all other PMUs;
  fixed x86_pmu_start() to call x86_pmu.enable();
  merged __x86_pmu_disable into x86_pmu_stop() ]
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <4b703875.0a04d00a.7896.ffffb824@mx.google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
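The shape of the new pair on x86, as a sketch (signatures simplified relative to the era's code, which passed hwc and idx explicitly):

    /* stop: halt counting and snapshot the count, but keep the counter. */
    static void x86_pmu_stop(struct perf_event *event)
    {
        x86_pmu.disable(event);          /* stop counting       */
        x86_perf_event_update(event);    /* save current value  */
    }

    /* start: reload the saved period and resume on the same counter. */
    static void x86_pmu_start(struct perf_event *event)
    {
        x86_perf_event_set_period(event);
        x86_pmu.enable(event);           /* effective at perf_enable() */
    }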
-
Submitted by Peter Zijlstra

DaveM reported that perf currently interprets the pgoff value reported by the MMAP events as a byte range, but the kernel reports it as a page offset. Since it's broken (and unusable) anyway, change the kernel behaviour (ABI) to report bytes indeed, avoiding the need for userspace to deal with PAGE_SIZE things.

Reported-by: David Miller <davem@davemloft.net>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
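On the kernel side the conversion is a single shift, sketched here as a hypothetical helper (vm_pgoff is stored in pages):

    /* Sketch: the MMAP event now carries the file offset in bytes. */
    static u64 mmap_event_pgoff(const struct vm_area_struct *vma)
    {
        return (u64)vma->vm_pgoff << PAGE_SHIFT;
    }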
-
Submitted by Arnaldo Carvalho de Melo

symbol->end is not fixed up at symbol_filter time, only after all symbols for a DSO are loaded, and for asm symbols it may be bogus, causing segfaults when hits happen in these symbols.

Reported-by: David Miller <davem@davemloft.net>
Reported-by: Anton Blanchard <anton@samba.org>
Acked-by: David Miller <davem@davemloft.net>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: <stable@kernel.org> # for .33.x. Does not apply cleanly, needs backport.
LKML-Reference: <20100225155740.GB8553@ghostprotocols.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 25 Feb 2010, 6 commits
-
-
Submitted by Arnaldo Carvalho de Melo

Be more clear about DSO long names and report from which file kernel symbols were obtained, all in --verbose mode:

    [root@mica ~]# perf report -v > /dev/null
    Looking at the vmlinux_path (5 entries long)
    Using /lib/modules/2.6.33-rc8-tip-00777-g0918527-dirty/build/vmlinux for symbols
    [root@mica ~]# mv /lib/modules/2.6.33-rc8-tip-00777-g0918527-dirty/build/vmlinux /tmp/dd
    [root@mica ~]# perf report -v > /dev/null
    Looking at the vmlinux_path (5 entries long)
    Using /proc/kallsyms for symbols
    [root@mica ~]#

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1266866139-6361-1-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Arnaldo Carvalho de Melo

To overcome a silly gcc warning:

    cc1: warnings being treated as errors
    builtin-top.c: In function ‘lookup_sym_source’:
    builtin-top.c:291: warning: not protecting local variables: variable length buffer
    make: *** [builtin-top.o] Error 1
    make: *** Waiting for unfinished jobs....

that is emitted for this:

    const size_t pattern_len = BITS_PER_LONG / 4 + 2;
    char pattern[pattern_len + 1];

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1266866062-6287-1-git-send-email-acme@infradead.org>
[ -v2: macroify the naming style ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
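The fix is to make the length a compile-time constant, roughly like this (the macro name here is illustrative, not necessarily the one used):

    /* A macro instead of a const variable: the array is no longer a
     * variable length array, so -Wstack-protector stays quiet. */
    #define PATTERN_LEN (BITS_PER_LONG / 4 + 2)

    char pattern[PATTERN_LEN + 1];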
-
Submitted by Zhang, Yanmin

In dso__split_kallsyms(), curr_map saves the return value of map__new2, so check it, instead of the variable map, after the call returns.

Signed-off-by: Zhang Yanmin <yanmin_zhang@linux.intel.com>
Acked-by: David S. Miller <davem@davemloft.net>
Cc: <stable@kernel.org> # for .33.x
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <1267066851.1726.9.camel@localhost>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
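The gist of the corrected error path (a sketch of the fixed code):

    curr_map = map__new2(pos->start, dso);
    if (curr_map == NULL) {      /* test the pointer map__new2() returned */
        dso__delete(dso);
        return -1;
    }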
-
Submitted by Frederic Weisbecker

The syscall_name() helper, which resolves a syscall arch number to its name, is not yet available, as we first need to implement event injection for it to work. Remove it from the documentation, or tag its references as not yet available. Once it is implemented, we can just revert the current patch.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Tom Zanussi <tzanussi@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Keiichi KII <k-keiichi@bx.jp.nec.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Tom Zanussi

Also a small update to the perf-trace-perl and perf-trace docs.

Signed-off-by: Tom Zanussi <tzanussi@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Keiichi KII <k-keiichi@bx.jp.nec.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <1264580883-15324-13-git-send-email-tzanussi@gmail.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
-
Submitted by Tom Zanussi

If we know the size of a tuple in advance, there's no need to resize it; start out with the known size in the first place.

Signed-off-by: Tom Zanussi <tzanussi@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Keiichi KII <k-keiichi@bx.jp.nec.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <1266822779.6426.4.camel@tropicana>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
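With the CPython C API the pattern looks like this sketch (a contrived example, not the tool's code): allocate the tuple at its final, known size and fill it, rather than growing and resizing it later.

    #include <Python.h>

    static PyObject *make_pair(PyObject *a, PyObject *b)
    {
        PyObject *t = PyTuple_New(2);   /* exact size known up front */

        if (t == NULL)
            return NULL;

        Py_INCREF(a);
        Py_INCREF(b);
        PyTuple_SET_ITEM(t, 0, a);      /* SET_ITEM steals the reference */
        PyTuple_SET_ITEM(t, 1, b);

        return t;
    }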
-