- 02 Dec 2010, 5 commits
-
-
Submitted by Shawn Bohrer

Cc: Ingo Molnar <mingo@elte.hu>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1291168642-11402-6-git-send-email-shawn.bohrer@gmail.com>
Signed-off-by: Shawn Bohrer <shawn.bohrer@gmail.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Shawn Bohrer

Cc: Ingo Molnar <mingo@elte.hu>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1291168642-11402-5-git-send-email-shawn.bohrer@gmail.com>
Signed-off-by: Shawn Bohrer <shawn.bohrer@gmail.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Shawn Bohrer

The --displacement and --modules options to perf diff both use -m as a short flag. Change --displacement to use -M, since other perf commands use -m, --modules.

Cc: Ingo Molnar <mingo@elte.hu>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1291168642-11402-4-git-send-email-shawn.bohrer@gmail.com>
Signed-off-by: Shawn Bohrer <shawn.bohrer@gmail.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Shawn Bohrer

Cc: Ingo Molnar <mingo@elte.hu>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1291168642-11402-3-git-send-email-shawn.bohrer@gmail.com>
Signed-off-by: Shawn Bohrer <shawn.bohrer@gmail.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Shawn Bohrer

Cc: Ingo Molnar <mingo@elte.hu>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1291168642-11402-2-git-send-email-shawn.bohrer@gmail.com>
Signed-off-by: Shawn Bohrer <shawn.bohrer@gmail.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 01 Dec 2010, 13 commits
-
-
Submitted by Corey Ashford

There are a number of issues that prevent multiple tracepoint events from being specified in a single -e/--event switch, separated by commas. For example:

  perf stat -e irq:irq_handler_entry,irq:irq_handler_exit ...

fails because the tracepoint event parsing code doesn't recognize the comma separator properly. This patch corrects those issues.

Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Julia Lawall <julia@diku.dk>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Reported-by: Michael Ellerman <michaele@au1.ibm.com>
LKML-Reference: <1291156021-17711-1-git-send-email-cjashfor@linux.vnet.ibm.com>
Signed-off-by: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
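The usual way to handle such a list is to split it on the separator before handing each token to the existing single-event parser. A minimal, illustrative sketch of that shape (not the actual perf parser; the function names and buffer size are made up for the example):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for the real per-event parser. */
static int parse_one_event(const char *str)
{
	printf("parsing event '%s'\n", str);
	return 0;
}

/* Split "irq:irq_handler_entry,irq:irq_handler_exit" on commas and
 * feed each token to the single-event parser. */
static int parse_event_list(const char *list)
{
	char buf[256], *tok, *save = NULL;

	if (strlen(list) >= sizeof(buf))
		return -1;
	strcpy(buf, list);

	for (tok = strtok_r(buf, ",", &save); tok; tok = strtok_r(NULL, ",", &save))
		if (parse_one_event(tok))
			return -1;
	return 0;
}
```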
-
Submitted by Don Zickus

There seems to be a new dependency on arch/*/lib/memcpy*.S when compiling the perf tool. Make sure that file is included in the MANIFEST when creating the tarball.

Cc: Ingo Molnar <mingo@elte.hu>
LKML-Reference: <1291155133-3499-2-git-send-email-dzickus@redhat.com>
Signed-off-by: Don Zickus <dzickus@redhat.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo

No need to check that many times whether debug_trace is on.

Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <new-submission>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Thomas Gleixner

The ordered sample code allocates individual struct sample_queue reference objects, which are 48 bytes on 64bit and 20 bytes on 32bit. That's silly. Allocate ~64k sized chunks and hand the objects out from them.

Performance gain: ~15%

Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <20101130163820.398713983@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
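The idea, roughly: grab one large chunk from malloc and carve fixed-size objects out of it, going back to malloc only when the current chunk is exhausted. A minimal sketch of that pattern (illustrative only; the names and sizes are assumptions, not the actual perf code, and earlier chunks would in practice be kept on a list so they can be freed at session teardown):

```c
#include <stdlib.h>

#define CHUNK_SIZE (64 * 1024)

struct sample_obj {                       /* stand-in for struct sample_queue */
	unsigned long long timestamp;
	void *event;
};

static char *chunk;
static size_t chunk_used = CHUNK_SIZE;    /* forces an allocation on first use */

static struct sample_obj *sample_obj_alloc(void)
{
	struct sample_obj *obj;

	if (chunk_used + sizeof(*obj) > CHUNK_SIZE) {
		chunk = malloc(CHUNK_SIZE);  /* one malloc serves many objects */
		if (!chunk)
			return NULL;
		chunk_used = 0;
	}
	obj = (struct sample_obj *)(chunk + chunk_used);
	chunk_used += sizeof(*obj);
	return obj;
}
```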
-
Submitted by Thomas Gleixner

When the sample queue is flushed we free the sample reference objects, even though we need to malloc new objects when we process further. Stop the malloc/free orgy and cache the already allocated objects for reuse. Only allocate when the cache is empty.

Performance gain: ~10%

Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <20101130163820.338488630@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
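This is the classic free-list pattern: instead of free(), push the object onto a cache and pop from it on the next allocation. A hedged sketch under assumed names (a simple singly-linked cache, not the actual perf data structure):

```c
#include <stdlib.h>

struct sample_obj {
	struct sample_obj *next;        /* chains free objects in the cache */
	unsigned long long timestamp;
	void *event;
};

static struct sample_obj *cache;        /* free-list of recycled objects */

static struct sample_obj *sample_obj_get(void)
{
	struct sample_obj *obj = cache;

	if (obj) {                      /* reuse a previously released object */
		cache = obj->next;
		return obj;
	}
	return malloc(sizeof(*obj));    /* only hit malloc when the cache is empty */
}

static void sample_obj_put(struct sample_obj *obj)
{
	obj->next = cache;              /* don't free(); keep it for reuse */
	cache = obj;
}
```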
-
Submitted by Thomas Gleixner

Profiling perf with perf revealed that a large part of the processing time is spent in malloc/memcpy/free in the sample ordering code. That code copies the data from the mmap into malloc'ed memory. That's silly. We can keep the mmap and just store the pointer in the queuing data structure. For 64 bit this is not a problem as we map the whole file anyway. On 32bit we keep 8 maps around and unmap the oldest before mmapping the next chunk of the file.

Performance gain: 2.95s -> 1.23s (factor 2.4)

Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <20101130163820.278787719@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
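In other words, the per-event copy disappears and the queue simply records where in the mapping the event lives. A schematic before/after (purely illustrative; the structure and function names are assumptions):

```c
#include <stdlib.h>
#include <string.h>

struct sample_queue_entry {
	unsigned long long timestamp;
	void *event;
};

/* Before: each queued event is copied out of the mapped file. */
static int queue_event_copy(struct sample_queue_entry *q,
			    void *mmap_base, size_t offset, size_t size)
{
	q->event = malloc(size);                /* malloc + memcpy + later free, per event */
	if (!q->event)
		return -1;
	memcpy(q->event, (char *)mmap_base + offset, size);
	return 0;
}

/* After: the mapping stays around, so the queue just points into it. */
static int queue_event_ref(struct sample_queue_entry *q,
			   void *mmap_base, size_t offset)
{
	q->event = (char *)mmap_base + offset;  /* no copy, no allocation */
	return 0;
}
```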
-
Submitted by Thomas Gleixner

On 64bit we can map the whole file in one go; on 32bit we can at least map 32MB and not map/unmap tiny chunks of the file. Base the progress bar on 1/16 of the data size. Preparatory patch to get rid of the malloc/memcpy/free of trace data.

Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <20101130163820.213687773@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Thomas Gleixner

No need to check twice.

Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <20101130163820.152886642@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Thomas Gleixner

The progress bar is changed only when the file offset changes, which happens only when the next mmap is done. There is no need to call ui_progress_update() for every event.

Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <20101130163820.094836523@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Thomas Gleixner

Replace the pseudo-C++ "self" argument with "session" and give the mmap-related variables sensible names. "shift" is a complete misnomer - it took me several rounds of cursing to figure out that it's not a shift value.

Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <20101130163820.029687218@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Thomas Gleixner

There is no reason to use a struct sample_event pointer in struct sample_queue and type cast it when flushing the queue.

Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <20101130163819.969462809@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Thomas Gleixner

The home-brewed sort algorithm fails to sort in time order. One of the problem spots is that it fails to deal with equal timestamps correctly. My first gut reaction was to replace the fancy list with an rbtree, but the performance was 3 times worse. Rewrite it so it works.

Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <20101130163819.908482530@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
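The usual fix for the equal-timestamp case is to make the insertion stable: insert a new sample after the last element whose timestamp is less than or equal to its own, so samples with identical timestamps keep their arrival order. A small illustrative sketch of that rule (a plain singly-linked list with made-up names, not the actual perf queue):

```c
struct sample {
	unsigned long long timestamp;
	struct sample *next;
};

/* Insert 'new' keeping the list sorted by timestamp; the '<=' makes the
 * sort stable, so samples with equal timestamps stay in arrival order. */
static void queue_insert(struct sample **head, struct sample *new)
{
	struct sample **pos = head;

	while (*pos && (*pos)->timestamp <= new->timestamp)
		pos = &(*pos)->next;

	new->next = *pos;
	*pos = new;
}
```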
-
Submitted by Arnaldo Carvalho de Melo

PERF_SAMPLE_{CALLCHAIN,RAW} have variable lengths per sample, but the others can be precalculated, reducing the per-sample cost a bit.

Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Ian Munsie <imunsie@au1.ibm.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <new-submission>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
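The fixed-size sample fields each contribute a u64, so the constant part of the sample size can be derived once from the sample_type bitmask instead of per event. A hedged sketch of that precalculation (the bit names are the real PERF_SAMPLE_* flags from linux/perf_event.h; the function itself and the set of bits handled are illustrative):

```c
#include <linux/perf_event.h>
#include <stddef.h>
#include <stdint.h>

/* Size of the fixed part of a sample, computed once; PERF_SAMPLE_CALLCHAIN
 * and PERF_SAMPLE_RAW are variable-length and still have to be sized while
 * parsing each individual sample. */
static size_t fixed_sample_size(uint64_t sample_type)
{
	size_t size = 0;

	if (sample_type & PERF_SAMPLE_IP)        size += sizeof(uint64_t);
	if (sample_type & PERF_SAMPLE_TID)       size += sizeof(uint64_t); /* pid + tid */
	if (sample_type & PERF_SAMPLE_TIME)      size += sizeof(uint64_t);
	if (sample_type & PERF_SAMPLE_ADDR)      size += sizeof(uint64_t);
	if (sample_type & PERF_SAMPLE_ID)        size += sizeof(uint64_t);
	if (sample_type & PERF_SAMPLE_STREAM_ID) size += sizeof(uint64_t);
	if (sample_type & PERF_SAMPLE_CPU)       size += sizeof(uint64_t); /* cpu + res */
	if (sample_type & PERF_SAMPLE_PERIOD)    size += sizeof(uint64_t);

	return size;
}
```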
-
- 27 Nov 2010, 5 commits
-
-
Submitted by Arnaldo Carvalho de Melo

Fix it by explaining what can be happening and giving the number of processed and lost events. Also holler if unknown events were found; that can happen when a perf.data file collected with a newer tool (which added new event types) is reported on with an older perf tool, or it can be a bug, so ask for a report to be made. Works on both --tui and --stdio.

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
LKML-Reference: <new-submission>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Shawn Bohrer

Some filesystems, like xfs and reiserfs, will return DT_UNKNOWN for the d_type. Handle this case by calling stat() to determine the type.

Cc: Andreas Schwab <schwab@linux-m68k.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1290355779-3276-1-git-send-email-sbohrer@rgmadvisors.com>
Signed-off-by: Shawn Bohrer <sbohrer@rgmadvisors.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
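The fallback is a standard idiom: trust d_type when the filesystem fills it in, and stat() the entry only when it reports DT_UNKNOWN. A minimal sketch of the pattern (the helper function and the "is it a directory?" question are just examples, not the perf code itself):

```c
#include <dirent.h>
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/stat.h>

/* Returns true if the directory entry is a subdirectory, falling back to
 * stat() when the filesystem (e.g. xfs, reiserfs) reports DT_UNKNOWN. */
static bool entry_is_dir(const char *dirpath, const struct dirent *ent)
{
	struct stat st;
	char path[PATH_MAX];

	if (ent->d_type != DT_UNKNOWN)
		return ent->d_type == DT_DIR;

	snprintf(path, sizeof(path), "%s/%s", dirpath, ent->d_name);
	if (stat(path, &st) < 0)
		return false;

	return S_ISDIR(st.st_mode);
}
```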
-
Submitted by Ian Munsie

If a 32bit userspace perf is running on a 64bit kernel, the end of the final map in the kernel would incorrectly be set to 2^32-1 rather than 2^64-1.

Cc: Ingo Molnar <mingo@elte.hu>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1290658375-10342-1-git-send-email-imunsie@au1.ibm.com>
Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
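The underlying issue is the width of the sentinel: ~0UL is 2^32-1 on a 32-bit build but 2^64-1 on a 64-bit one, so kernel addresses need a fixed 64-bit type regardless of the userspace word size. A tiny illustration (assumed names, not the actual perf code):

```c
#include <stdint.h>

struct map {
	uint64_t start;
	uint64_t end;
};

static void map_fixup_end(struct map *m)
{
	/* Wrong on a 32-bit build: ~0UL is only 0xffffffff there. */
	/* m->end = ~0UL; */

	/* Right: a 64-bit constant, so the final map covers the whole
	 * 64-bit kernel address space even from 32-bit userspace. */
	m->end = UINT64_MAX;
}
```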
-
Submitted by Arnaldo Carvalho de Melo

Tool developers have to fill in a 'perf_event_ops' method table to specify how to handle each event; so far the handlers that were not explicitly specified would get a stub that just discards the event. Change that so that tool developers can get the lost event details and the total number of such events at the end of the 'perf report -D' output.

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tom Zanussi <tzanussi@gmail.com>
LKML-Reference: <new-submission>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo

Collecting build-ids for long running sessions may take a long time because it needs to traverse the whole just-collected perf.data stream of events, marking the DSOs that had hits and then looking for the .note.gnu.build-id ELF section. For things like the 'trace' tool, which records and right away consumes the data on systems where it's unlikely that the DSOs being monitored will change while 'trace' runs, it is desirable to be able to skip build-id collection, so add a -B/--no-buildid option to perf record to allow such a use case. Longer term we'll avoid all this if, at DSO load time in the kernel, we take advantage of that slow code path to collect the build-id and stash it somewhere, so that we can insert it in the PERF_RECORD_MMAP event.

Reported-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tom Zanussi <tzanussi@gmail.com>
LKML-Reference: <new-submission>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 26 Nov 2010, 16 commits
-
-
Submitted by Cyrill Gorcunov

Add a description of .config for the sake of RAW events. At least this should shed some light for those who will be reading this code.

Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
Reviewed-by: Stephane Eranian <eranian@google.com>
Cc: Lin Ming <ming.m.lin@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Peter Zijlstra

The perf hardware pmu got initialized at various points in the boot, some before early_initcall(), some after (notably arch_initcall). The problem is that the NMI lockup detector is run from early_initcall() and expects the hardware pmu to be present. Sanitize this by moving all architecture hardware pmu implementations to initialize at early_initcall() and move the lockup detector to an explicit initcall right after that.

Cc: paulus <paulus@samba.org>
Cc: davem <davem@davemloft.net>
Cc: Michael Cree <mcree@orcon.net.nz>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1290707759.2145.119.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Andi Kleen

When booting up a CPU, set the various topology masks before calling the CPU_STARTING notifier. This way the notifier can actually use the masks. This is needed for a perf change.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1290077254-12165-2-git-send-email-andi@firstfloor.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Peter Zijlstra

Some arch implementations call perf_event_overflow() by 'accident'; ignore this.

Reported-by: Francis Moreau <francis.moro@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Franck Bui-Huu

Signed-off-by: Franck Bui-Huu <fbuihuu@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1290525705-6265-3-git-send-email-fbuihuu@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Franck Bui-Huu

Signed-off-by: Franck Bui-Huu <fbuihuu@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1290525705-6265-2-git-send-email-fbuihuu@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Franck Bui-Huu

and use it when appropriate.

Signed-off-by: Franck Bui-Huu <fbuihuu@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1290525705-6265-1-git-send-email-fbuihuu@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Peter Zijlstra

Make tags find the trace-event definitions.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: WANG Cong <xiyou.wangcong@gmail.com>
LKML-Reference: <1290591835.2072.438.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Ingo Molnar

Conflicts:
	arch/x86/kernel/apic/hw_nmi.c

Merge reason: Resolve conflict, queue up dependent patch.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Ingo Molnar

Merge reason: Pick up latest fixes.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Peter Zijlstra

Stephane noticed that because the perf_sw_event() call is inside the perf_event_task_sched_out() call it won't get called unless we have a per-task counter.

Reported-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Peter Zijlstra

This leads to a Kconfig dep inversion: x86 selects PERF_EVENT (due to a hw_breakpoint dep) but doesn't unconditionally provide HAVE_PERF_EVENT. (This can cause build failures on M386/M486 kernel .configs.)

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20101117222055.982965150@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Don Zickus

In kvm virt guests, the perf counters are not emulated; instead they return zero on a rdmsrl. The perf nmi handler uses the fact that crossing zero means the counter overflowed (for those counters that do not have specific interrupt bits). Therefore on kvm guests, perf will swallow all NMIs, thinking the counters overflowed. This causes problems for subsystems like kgdb which need NMIs to do their magic. This problem was discovered by running kgdb tests. The solution is to write garbage into a perf counter during initialization and hopefully read back the same number. On kvm guests, the value will be read back as zero and we disable perf as a result.

Reported-by: Jason Wessel <jason.wessel@windriver.com>
Patch-inspired-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Don Zickus <dzickus@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <1290462923-30734-1-git-send-email-dzickus@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
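The detection trick is simply "write a known value to a counter MSR, read it back, and bail out if it doesn't stick". A rough, kernel-context sketch of that shape (assuming the wrmsrl()/rdmsrl() accessors; the function, the probe value, and the MSR parameter are illustrative, and the real code also has to pick a safe counter and tolerate faulting MSR accesses):

```c
#include <linux/types.h>
#include <asm/msr.h>

/* Sketch only: probe whether the PMU counter MSRs actually work.
 * 'msr_test_counter' stands in for the first general-purpose counter MSR. */
static bool pmu_counters_work(unsigned int msr_test_counter)
{
	u64 val = 0xabcdUL;                 /* arbitrary non-zero pattern */
	u64 val_new;

	wrmsrl(msr_test_counter, val);      /* write garbage ... */
	rdmsrl(msr_test_counter, val_new);  /* ... and read it back */

	/* A kvm guest without PMU emulation reads back 0 here. */
	return val_new == val;
}
```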
-
Submitted by Thomas Gleixner

It was found that sometimes children of tasks with inherited events had one extra event. Eventually it turned out to be due to the list rotation not being exclusive with the list iteration in the inheritance code. Cure this by temporarily disabling the rotation while we inherit the events.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Cc: <stable@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Hitoshi Mitake

perf bench: Add a feature that measures the performance of the arch/x86/lib/memcpy_64.S memcpy routines via 'perf bench mem'. This patch ports arch/x86/lib/memcpy_64.S to perf bench mem memcpy for benchmarking memcpy() in userland, in a tricky and dirty way. util/include/asm/cpufeature.h, util/include/asm/dwarf2.h, and util/include/linux/linkage.h are mostly dummy files with small wrappers, so that we are able to include memcpy_64.S unmodified.

Signed-off-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
Cc: h.mitake@gmail.com
Cc: Miao Xie <miaox@cn.fujitsu.com>
Cc: Ma Ling <ling.ma@intel.com>
Cc: Zhao Yakui <yakui.zhao@intel.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Andi Kleen <andi@firstfloor.org>
LKML-Reference: <1290668693-27068-2-git-send-email-mitake@dcl.info.waseda.ac.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Hitoshi Mitake

After applying this patch, perf bench mem memcpy prints both the prefaulted and non-prefaulted scores of memcpy(). New options --no-prefault and --only-prefault are added to print a single result, mainly for scripting usage.

Usage example:

| mitake@X201i:~/linux/.../tools/perf% ./perf bench mem memcpy -l 500MB
| # Running mem/memcpy benchmark...
| # Copying 500MB Bytes ...
|
| 634.969014 MB/Sec
| 4.828062 GB/Sec (with prefault)
| mitake@X201i:~/linux/.../tools/perf% ./perf bench mem memcpy -l 500MB --only-prefault
| # Running mem/memcpy benchmark...
| # Copying 500MB Bytes ...
|
| 4.705192 GB/Sec (with prefault)
| mitake@X201i:~/linux/.../tools/perf% ./perf bench mem memcpy -l 500MB --no-prefault
| # Running mem/memcpy benchmark...
| # Copying 500MB Bytes ...
|
| 642.725568 MB/Sec

Signed-off-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
Cc: h.mitake@gmail.com
Cc: Miao Xie <miaox@cn.fujitsu.com>
Cc: Ma Ling <ling.ma@intel.com>
Cc: Zhao Yakui <yakui.zhao@intel.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Andi Kleen <andi@firstfloor.org>
LKML-Reference: <1290668693-27068-1-git-send-email-mitake@dcl.info.waseda.ac.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 24 Nov 2010, 1 commit
-
-
Submitted by Rabin Vincent

At least on ARM, padding is inserted between rb_node and sym in struct symbol_name_rb_node, causing "((void *)sym) - sizeof(struct rb_node)" to point inside rb_node rather than at the symbol_name_rb_node. Fix this by converting the code to use container_of().

Cc: Ian Munsie <imunsie@au1.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Ming Lei <tom.leiming@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tom Zanussi <tzanussi@gmail.com>
LKML-Reference: <20101123163106.GA25677@debian>
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
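container_of() derives the address of the enclosing structure from a pointer to one of its members via offsetof(), so it stays correct no matter what padding the ABI inserts. A self-contained illustration of the two approaches (the structs are simplified stand-ins for the ones named in the commit; the surrounding code is made up):

```c
#include <stddef.h>
#include <stdio.h>

#ifndef container_of
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
#endif

struct rb_node { struct rb_node *left, *right; };    /* simplified stand-in */
struct symbol  { unsigned long long start, end; };   /* simplified stand-in */

struct symbol_name_rb_node {
	struct rb_node rb_node;
	struct symbol  sym;   /* padding may precede this member on some ABIs */
};

int main(void)
{
	struct symbol_name_rb_node node;
	struct symbol *sym = &node.sym;

	/* Fragile: assumes sym sits exactly sizeof(struct rb_node) bytes in. */
	void *wrong = (char *)sym - sizeof(struct rb_node);

	/* Robust: computed from the real member offset, padding included. */
	struct symbol_name_rb_node *right =
		container_of(sym, struct symbol_name_rb_node, sym);

	printf("wrong=%p right=%p actual=%p\n", wrong, (void *)right, (void *)&node);
	return 0;
}
```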
-