- 31 May 2011 (2 commits)

Committed by Robert Richter

This fixes the A->B/B->A locking dependency, see the warning below.

The function task_exit_notify() is called with (task_exit_notifier).rwsem
held and then calls sync_buffer(), which locks buffer_mutex. In
sync_start() the buffer_mutex was taken to prevent notifier functions
from running before sync_start() is finished. But when registering the
notifier, (task_exit_notifier).rwsem is locked too, now in a different
order than in sync_buffer(). In theory this causes a locking dependency,
which does not occur in practice since task_exit_notify() is always
called after the notifier is registered, meaning the lock has already
been released.

However, after checking the notifier functions it turned out that the
buffer_mutex in sync_start() is unnecessary. This is because
sync_buffer() may be called from the notifiers even if sync_start() has
not finished yet; the buffers are already allocated, just empty. There
is no need to protect this with the mutex.

So we fix this theoretical locking dependency by removing buffer_mutex
in sync_start(). This is similar to the implementation before commit
750d857c ("oprofile: fix crash when accessing freed task structs"),
which introduced the locking dependency.

Lockdep warning:

 oprofiled/4447 is trying to acquire lock:
  (buffer_mutex){+.+...}, at: [<ffffffffa0000e55>] sync_buffer+0x31/0x3ec [oprofile]

 but task is already holding lock:
  ((task_exit_notifier).rwsem){++++..}, at: [<ffffffff81058026>] __blocking_notifier_call_chain+0x39/0x67

 which lock already depends on the new lock.

 the existing dependency chain (in reverse order) is:

 -> #1 ((task_exit_notifier).rwsem){++++..}:
        [<ffffffff8106557f>] lock_acquire+0xf8/0x11e
        [<ffffffff81463a2b>] down_write+0x44/0x67
        [<ffffffff810581c0>] blocking_notifier_chain_register+0x52/0x8b
        [<ffffffff8105a6ac>] profile_event_register+0x2d/0x2f
        [<ffffffffa00013c1>] sync_start+0x47/0xc6 [oprofile]
        [<ffffffffa00001bb>] oprofile_setup+0x60/0xa5 [oprofile]
        [<ffffffffa00014e3>] event_buffer_open+0x59/0x8c [oprofile]
        [<ffffffff810cd3b9>] __dentry_open+0x1eb/0x308
        [<ffffffff810cd59d>] nameidata_to_filp+0x60/0x67
        [<ffffffff810daad6>] do_last+0x5be/0x6b2
        [<ffffffff810dbc33>] path_openat+0xc7/0x360
        [<ffffffff810dbfc5>] do_filp_open+0x3d/0x8c
        [<ffffffff810ccfd2>] do_sys_open+0x110/0x1a9
        [<ffffffff810cd09e>] sys_open+0x20/0x22
        [<ffffffff8146ad4b>] system_call_fastpath+0x16/0x1b

 -> #0 (buffer_mutex){+.+...}:
        [<ffffffff81064dfb>] __lock_acquire+0x1085/0x1711
        [<ffffffff8106557f>] lock_acquire+0xf8/0x11e
        [<ffffffff814634f0>] mutex_lock_nested+0x63/0x309
        [<ffffffffa0000e55>] sync_buffer+0x31/0x3ec [oprofile]
        [<ffffffffa0001226>] task_exit_notify+0x16/0x1a [oprofile]
        [<ffffffff81467b96>] notifier_call_chain+0x37/0x63
        [<ffffffff8105803d>] __blocking_notifier_call_chain+0x50/0x67
        [<ffffffff81058068>] blocking_notifier_call_chain+0x14/0x16
        [<ffffffff8105a718>] profile_task_exit+0x1a/0x1c
        [<ffffffff81039e8f>] do_exit+0x2a/0x6fc
        [<ffffffff8103a5e4>] do_group_exit+0x83/0xae
        [<ffffffff8103a626>] sys_exit_group+0x17/0x1b
        [<ffffffff8146ad4b>] system_call_fastpath+0x16/0x1b

 other info that might help us debug this:

 1 lock held by oprofiled/4447:
  #0:  ((task_exit_notifier).rwsem){++++..}, at: [<ffffffff81058026>] __blocking_notifier_call_chain+0x39/0x67

 stack backtrace:
 Pid: 4447, comm: oprofiled Not tainted 2.6.39-00007-gcf4d8d4 #10
 Call Trace:
  [<ffffffff81063193>] print_circular_bug+0xae/0xbc
  [<ffffffff81064dfb>] __lock_acquire+0x1085/0x1711
  [<ffffffffa0000e55>] ? sync_buffer+0x31/0x3ec [oprofile]
  [<ffffffff8106557f>] lock_acquire+0xf8/0x11e
  [<ffffffffa0000e55>] ? sync_buffer+0x31/0x3ec [oprofile]
  [<ffffffff81062627>] ? mark_lock+0x42f/0x552
  [<ffffffffa0000e55>] ? sync_buffer+0x31/0x3ec [oprofile]
  [<ffffffff814634f0>] mutex_lock_nested+0x63/0x309
  [<ffffffffa0000e55>] ? sync_buffer+0x31/0x3ec [oprofile]
  [<ffffffffa0000e55>] sync_buffer+0x31/0x3ec [oprofile]
  [<ffffffff81058026>] ? __blocking_notifier_call_chain+0x39/0x67
  [<ffffffff81058026>] ? __blocking_notifier_call_chain+0x39/0x67
  [<ffffffffa0001226>] task_exit_notify+0x16/0x1a [oprofile]
  [<ffffffff81467b96>] notifier_call_chain+0x37/0x63
  [<ffffffff8105803d>] __blocking_notifier_call_chain+0x50/0x67
  [<ffffffff81058068>] blocking_notifier_call_chain+0x14/0x16
  [<ffffffff8105a718>] profile_task_exit+0x1a/0x1c
  [<ffffffff81039e8f>] do_exit+0x2a/0x6fc
  [<ffffffff81465031>] ? retint_swapgs+0xe/0x13
  [<ffffffff8103a5e4>] do_group_exit+0x83/0xae
  [<ffffffff8103a626>] sys_exit_group+0x17/0x1b
  [<ffffffff8146ad4b>] system_call_fastpath+0x16/0x1b

Reported-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Cc: Carl Love <carll@us.ibm.com>
Cc: <stable@kernel.org> # .36+
Signed-off-by: Robert Richter <robert.richter@amd.com>
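To make the inversion concrete, here is a minimal userspace sketch of the two lock orders lockdep reports above. The pthread mutexes stand in for buffer_mutex and (task_exit_notifier).rwsem; the function names are illustrative, not the kernel code.

```c
#include <pthread.h>

static pthread_mutex_t notifier_rwsem = PTHREAD_MUTEX_INITIALIZER; /* stands in for the rwsem */
static pthread_mutex_t buffer_mutex   = PTHREAD_MUTEX_INITIALIZER;

/* Path taken on task exit: the rwsem is held, then buffer_mutex is taken. */
static void task_exit_path(void)
{
	pthread_mutex_lock(&notifier_rwsem);   /* notifier core holds the rwsem */
	pthread_mutex_lock(&buffer_mutex);     /* sync_buffer() */
	pthread_mutex_unlock(&buffer_mutex);
	pthread_mutex_unlock(&notifier_rwsem);
}

/* Old sync_start(): buffer_mutex held across notifier registration,
 * which takes the rwsem -- the reverse order. The fix drops the outer
 * lock here, removing the B->A edge. */
static void old_sync_start_path(void)
{
	pthread_mutex_lock(&buffer_mutex);
	pthread_mutex_lock(&notifier_rwsem);   /* blocking_notifier_chain_register() */
	pthread_mutex_unlock(&notifier_rwsem);
	pthread_mutex_unlock(&buffer_mutex);
}

int main(void)
{
	task_exit_path();
	old_sync_start_path();
	return 0;
}
```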

Committed by Robert Richter

After registering the task free notifier we possibly have tasks in our
dying_tasks list. Free them after unregistering the notifier in case of
an error.

Cc: <stable@kernel.org> # .36+
Signed-off-by: Robert Richter <robert.richter@amd.com>
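A sketch of the error unwind this describes; task_handoff_register()/profile_event_register() are the real kernel APIs, while the labels and free_all_tasks() helper are illustrative guesses at the surrounding code.

```c
#include <linux/profile.h>
#include <linux/notifier.h>

static struct notifier_block task_free_nb;  /* handlers omitted in this sketch */
static struct notifier_block task_exit_nb;

static void free_all_tasks(void);           /* illustrative: frees dying_tasks */

int sync_start(void)
{
	int err;

	err = task_handoff_register(&task_free_nb);
	if (err)
		goto out;
	err = profile_event_register(PROFILE_TASK_EXIT, &task_exit_nb);
	if (err)
		goto out_unregister;
	/* ... start per-cpu work, etc. ... */
	return 0;

out_unregister:
	task_handoff_unregister(&task_free_nb);
	/* The handoff notifier may already have queued tasks on
	 * dying_tasks; free them only after nothing can add more. */
	free_all_tasks();
out:
	return err;
}
```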

- 29 October 2010 (1 commit)

Committed by Tejun Heo

flush_scheduled_work() is deprecated and scheduled to be removed.

sync_stop() currently cancels cpu_buffer works inside buffer_mutex and
flushes the system workqueue outside. Instead, split end_cpu_work()
into two parts - stopping further work enqueues and flushing works -
and do the former inside buffer_mutex and the latter outside.

For stable kernels v2.6.35.y and v2.6.36.y.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: stable@kernel.org
Signed-off-by: Robert Richter <robert.richter@amd.com>
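The resulting shape of sync_stop() might look as follows; end_cpu_work() and flush_cpu_work() follow the commit text, the rest is an illustrative sketch.

```c
#include <linux/mutex.h>

static DEFINE_MUTEX(buffer_mutex);

static void end_cpu_work(void);   /* now: only stop further enqueues */
static void flush_cpu_work(void); /* new: wait for queued works to finish */

static void sync_stop(void)
{
	mutex_lock(&buffer_mutex);
	end_cpu_work();               /* inside the mutex: no new work */
	/* ... unregister notifiers ... */
	mutex_unlock(&buffer_mutex);

	/* The works themselves may take buffer_mutex, so waiting for
	 * them must happen outside it. */
	flush_cpu_work();
}
```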

- 25 August 2010 (1 commit)

Committed by Robert Richter

This patch fixes a crash during shutdown reported below. The crash is
caused by accessing already freed task structs. The fix changes the
order for registering and unregistering notifier callbacks.

All notifiers must be initialized before buffers start working. To stop
buffer synchronization we cancel all workqueues, unregister the
notifier callback and then flush all buffers. After all of this we can
finally free all tasks in the list. This should avoid accessing freed
tasks.

On 22.07.10 01:14:40, Benjamin Herrenschmidt wrote:

> So the initial observation is a spinlock bad magic followed by a crash
> in the spinlock debug code:
>
> [ 1541.586531] BUG: spinlock bad magic on CPU#5, events/5/136
> [ 1541.597564] Unable to handle kernel paging request for data at address 0x6b6b6b6b6b6b6d03
>
> Backtrace looks like:
>
>   spin_bug+0x74/0xd4
>   ._raw_spin_lock+0x48/0x184
>   ._spin_lock+0x10/0x24
>   .get_task_mm+0x28/0x8c
>   .sync_buffer+0x1b4/0x598
>   .wq_sync_buffer+0xa0/0xdc
>   .worker_thread+0x1d8/0x2a8
>   .kthread+0xa8/0xb4
>   .kernel_thread+0x54/0x70
>
> So we are accessing a freed task struct in the work queue when
> processing the samples.

Reported-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: stable@kernel.org
Signed-off-by: Robert Richter <robert.richter@amd.com>
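The faulting address above is built from 0x6b bytes, the slab POISON_FREE pattern, which confirms a use-after-free. The teardown order the fix establishes can be sketched like this, with illustrative helper names standing in for the notifier and workqueue calls in buffer_sync.c:

```c
/* Teardown order described above (sketch; helper names illustrative). */
static void sync_stop(void)
{
	end_cpu_work();              /* 1. cancel all per-cpu workqueues */
	unregister_notifiers();      /* 2. illustrative: no new task references */
	flush_remaining_buffers();   /* 3. illustrative: drain the buffers */
	free_all_tasks();            /* 4. only now free the listed tasks */
}
```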

- 30 March 2010 (1 commit)

Committed by Tejun Heo

include cleanup: Update gfp.h and slab.h includes to prepare for
breaking implicit slab.h inclusion from percpu.h

percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which in
turn includes gfp.h, making everything defined by the two files
universally available and complicating inclusion dependencies.

The percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include
those headers directly instead of assuming availability. As this
conversion needs to touch a large number of source files, the following
script is used as the basis of conversion:

  http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following:

* Scan files for gfp and slab usages and update includes such that only
  the necessary includes are there, i.e. if only gfp is used, gfp.h; if
  slab is used, slab.h.

* When the script inserts a new include, it looks at the include blocks
  and tries to put the new include such that its order conforms to its
  surroundings. It's put in the include block which contains core
  kernel includes, in the same order that the rest are ordered -
  alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
  doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly because
  the file doesn't have a fitting include block), it prints out an
  error message indicating which .h file needs to be added to the file.

The conversion was done in the following steps:

1. The initial automatic conversion of all .c files updated slightly
   over 4000 files, deleting around 700 includes and adding ~480 gfp.h
   and ~3000 slab.h inclusions. The script emitted errors for ~400
   files.

2. Each error was manually checked. Some didn't need the inclusion,
   some needed manual addition, while adding it to the implementation
   .h or embedding .c file was more appropriate for others. This step
   added inclusions to around 150 files.

3. The script was run again and the output was compared to the edits
   from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed,
   e.g. lib/decompress_*.c used malloc/free() wrappers around slab
   APIs, requiring slab.h to be added manually.

5. The script was run on all .h files, but without automatically
   editing them, as sprinkling gfp.h and slab.h inclusions around .h
   files could easily lead to inclusion dependency hell. Most gfp.h
   inclusion directives were ignored, as stuff from gfp.h was usually
   widely available and often used in preprocessor macros. Each slab.h
   inclusion directive was examined and added manually as necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures
   were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
   distributed build env didn't work with gcov compiles) and a few more
   options had to be turned off depending on archs to make things build
   (like ipr on powerpc/64, which failed due to missing writeq).

   * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that it could be applied as
   a separate patch and serve as a bisection point.

Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch. If
there is a breakage, it's likely to be something in one of the arch
headers, which should be easily discoverable on most builds of the
specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
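For a single file the conversion amounts to adding the direct includes; an illustrative fragment:

```c
/* Before: kmalloc/GFP_KERNEL worked only because percpu.h (via
 * sched.h/module.h) happened to pull in slab.h and gfp.h. After this
 * patch, a file that uses them must include the headers itself. */
#include <linux/slab.h>   /* kmalloc, kfree */
#include <linux/gfp.h>    /* GFP_KERNEL (slab.h alone suffices if both are used) */

static void *alloc_example(size_t n)
{
	return kmalloc(n, GFP_KERNEL);
}
```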

- 24 September 2009 (1 commit)

Committed by Li Zefan

Remove open-coded zalloc_cpumask_var() and zalloc_cpumask_var_node().

Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
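The pattern being replaced, using oprofile's marked_cpus mask as the example (alloc_cpumask_var(), cpumask_clear(), and zalloc_cpumask_var() are the real kernel API; the wrapper function is illustrative):

```c
#include <linux/cpumask.h>
#include <linux/gfp.h>

static cpumask_var_t marked_cpus;

static int alloc_marked_cpus(void)
{
	/* before: open-coded zeroing allocation
	 *     if (!alloc_cpumask_var(&marked_cpus, GFP_KERNEL))
	 *             return -ENOMEM;
	 *     cpumask_clear(marked_cpus);
	 */

	/* after: the helper allocates and zeroes in one call */
	if (!zalloc_cpumask_var(&marked_cpus, GFP_KERNEL))
		return -ENOMEM;
	return 0;
}
```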

- 30 March 2009 (1 commit)

Committed by Russell King

Impact: fix ref to discarded function

  `buffer_sync_cleanup' referenced in section `.init.text' of
  arch/arm/oprofile/built-in.o: defined in discarded section
  `.exit.text' of arch/arm/oprofile/built-in.o

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
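A minimal reproduction of the underlying section mismatch (hypothetical example, not the actual arm/oprofile code): for built-in objects, .exit.text is discarded at link time, so any reference to it from kept code fails.

```c
#include <linux/init.h>

/* Placed in .exit.text, which is discarded for built-in code. */
static void __exit buffer_sync_cleanup(void) { }

static int __init example_init(void)
{
	buffer_sync_cleanup();  /* .init.text -> discarded .exit.text: linker error */
	return 0;
}
```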

- 22 January 2009 (1 commit)

Committed by Robert Richter

Delta patch to f7df8ed1 for tip/cpus4096. Moved initialization to
sync_start()/sync_stop(). No changes needed in buffer_sync.h and
oprof.c anymore.

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

- 12 January 2009 (1 commit)

Committed by Rusty Russell

Impact: use new cpumask API.

Convert misc driver functions to use struct cpumask.

To Do:
- Convert iucv_buffer_cpumask to cpumask_var_t.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Mike Travis <travis@sgi.com>
Acked-by: Dean Nelson <dcn@sgi.com>
Cc: Robert Richter <robert.richter@amd.com>
Cc: oprofile-list@lists.sf.net
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Chris Wright <chrisw@sous-sol.org>
Cc: virtualization@lists.osdl.org
Cc: xen-devel@lists.xensource.com
Cc: Ursula Braun <ursula.braun@de.ibm.com>
Cc: linux390@de.ibm.com
Cc: linux-s390@vger.kernel.org
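The flavor of the conversion, as a before/after sketch; the accessors are the cpumask API of that era, while the two wrapper functions are illustrative:

```c
#include <linux/cpumask.h>
#include <linux/slab.h>
#include <linux/smp.h>

/* Old style: fixed-size cpumask_t copied by value on the stack. */
static void old_style(void)
{
	cpumask_t mask = cpu_online_map;        /* whole-mask copy */
	cpu_clear(smp_processor_id(), mask);    /* old accessor */
}

/* New style: struct cpumask via cpumask_var_t, no large stack objects. */
static int new_style(void)
{
	cpumask_var_t mask;

	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
		return -ENOMEM;
	cpumask_copy(mask, cpu_online_mask);
	cpumask_clear_cpu(smp_processor_id(), mask);
	free_cpumask_var(mask);
	return 0;
}
```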

- 08 January 2009 (9 commits)

Committed by Robert Richter

The ifdefs can be removed since the code is no longer IBS-specific and
can be used for other purposes as well. IBS-specific code is only in
op_model_amd.c.

Signed-off-by: Robert Richter <robert.richter@amd.com>

Committed by Robert Richter

The new ring buffer implementation allows the storage of samples with
different sizes. This patch implements the usage of the new sample
format to store IBS samples in the cpu buffer.

Until now, writing to the cpu buffer could lead to incomplete sampling
sequences, since IBS samples were transferred in multiple samples and,
due to a full buffer, data could be lost at any time. This can't happen
any more since the complete data is reserved in advance and then stored
in a single sample.

Signed-off-by: Robert Richter <robert.richter@amd.com>
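A hedged sketch of the reserve-then-commit write path this describes; the op_cpu_buffer_* names follow the oprofile cpu buffer API of that era, but treat the exact signatures as approximate:

```c
#include "cpu_buffer.h"  /* struct op_entry / op_sample (oprofile-internal) */

static void write_ibs_sample(unsigned long eip, unsigned long event,
			     unsigned long *data, unsigned long size)
{
	struct op_entry entry;
	struct op_sample *sample;
	unsigned long i;

	/* Reserve room for the whole sample up front. */
	sample = op_cpu_buffer_write_reserve(&entry, size);
	if (!sample)
		return;  /* buffer full: the sample is dropped whole, never split */

	sample->eip = eip;
	sample->event = event;
	for (i = 0; i < size; i++)
		op_cpu_buffer_add_data(&entry, data[i]);

	op_cpu_buffer_write_commit(&entry);
}
```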

Committed by Robert Richter

This function provides access to attached data of a sample. It returns
the size of the data including the current value. Also,
op_cpu_buffer_get_size() is available to check if there is data
attached to the sample.

Signed-off-by: Robert Richter <robert.richter@amd.com>
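Reader-side usage might look like this sketch; consume() is a hypothetical consumer, and the return-value semantics follow the commit text:

```c
#include "cpu_buffer.h"  /* struct op_entry and the accessors (oprofile-internal) */

static void consume(unsigned long val);  /* hypothetical consumer */

static void read_attached_data(struct op_entry *entry)
{
	unsigned long val;

	if (!op_cpu_buffer_get_size(entry))
		return;                    /* nothing attached to this sample */

	/* op_cpu_buffer_get_data() returns the remaining size including
	 * the value just read, so it reaches 0 when data is exhausted. */
	while (op_cpu_buffer_get_data(entry, &val))
		consume(val);
}
```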

Committed by Robert Richter

Special events such as task or context switches are marked with an
escape code in the cpu buffer, followed by an event code or a task
identifier. There is one escape code per event. To make escape
sequences also available for data samples, the internal cpu buffer
format must be changed. The current implementation does not allow the
extension of event codes, since this would lead to collisions with the
task identifiers. To avoid this, this patch introduces an event mask
that allows the storage of multiple events with one escape code. Now,
task identifiers are stored in the data section of the sample. The
implementation also allows the usage of custom data in a sample. As a
side effect the new code is much more readable and easier to
understand.

Signed-off-by: Robert Richter <robert.richter@amd.com>
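A sketch of writing one special entry under the new format: a single ESCAPE_CODE, an event mask combining several flags, and the task identifier attached as sample data. The flag and helper names follow the oprofile sources of that era but should be treated as approximate:

```c
#include "cpu_buffer.h"    /* op_entry/op_sample, flag bits (oprofile-internal) */
#include "event_buffer.h"  /* ESCAPE_CODE */

static void write_special_entry(struct op_entry *entry,
				struct task_struct *task, int is_kernel)
{
	struct op_sample *sample;
	unsigned long flags = 0;

	if (task)
		flags |= USER_CTX_SWITCH;    /* task switch: task id goes in data */
	if (is_kernel)
		flags |= KERNEL_CTX_SWITCH;  /* kernel/user mode switch */

	sample = op_cpu_buffer_write_reserve(entry, task ? 1 : 0);
	if (!sample)
		return;
	sample->eip = ESCAPE_CODE;           /* marks a special entry */
	sample->event = flags;               /* an event mask, not a single code */
	if (task)
		op_cpu_buffer_add_data(entry, (unsigned long)task);
	op_cpu_buffer_write_commit(entry);
}
```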

Committed by Robert Richter

This implements the support of samples with attached data.

Signed-off-by: Robert Richter <robert.richter@amd.com>

Committed by Robert Richter

This unifies the usage of variable names within oprofile.

Signed-off-by: Robert Richter <robert.richter@amd.com>

Committed by Robert Richter

Signed-off-by: Robert Richter <robert.richter@amd.com>

Committed by Robert Richter

This code is broken since a TRACE_BEGIN_CODE is never sent to the
daemon. The data becomes corrupt since the backtrace is interpreted as
an IBS sample.

Signed-off-by: Robert Richter <robert.richter@amd.com>

Committed by Robert Richter

Signed-off-by: Robert Richter <robert.richter@amd.com>

- 01 January 2009 (1 commit)

Committed by Nick Piggin

struct dentry is one of the most critical structures in the kernel. So
it's sad to see it going neglected.

With CONFIG_PROFILING turned on (which is probably the common case at
least for distros and kernel developers), sizeof(struct dentry) == 208
here (64-bit). This gives 19 objects per slab.

I packed d_mounted into a hole, and took another 4 bytes off the inline
name length to take the padding out from the end of the structure. This
shrinks it to 200 bytes. I could have gone the other way and increased
the length to 40, but I'm aiming for a magic number, read on...

I then got rid of the d_cookie pointer. This shrinks it to 192 bytes.
Rant: why was this ever a good idea? The cookie system should increase
its hash size or use a tree or something if lookups are a problem. Also
the "fast dcookie lookups" in oprofile should be moved into the dcookie
code -- how can oprofile possibly care about the dcookie_mutex? It gets
dropped after get_dcookie() returns, so it can't be providing any sort
of protection.

At 192 bytes, 21 objects fit into a 4K page, saving about 3MB on my
system with ~140 000 entries allocated. 192 is also a multiple of 64,
so we get nice cacheline alignment on 64 and 32 byte line systems --
any given dentry will now require 3 cachelines to touch all fields,
whereas previously it would require 4.

I know the inline name size was chosen quite carefully, however with
the reduction in cacheline footprint, it should actually be just about
as fast to do a name lookup for a 36 character name as it was before
the patch (and faster for other sizes). The memory footprint savings
for names which are <= 32 or > 36 bytes long should more than make up
for the memory cost for 33-36 byte names.

Performance is a feature...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
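The objects-per-slab arithmetic checks out; a quick illustrative calculation:

```c
/* 4096/208 = 19 dentries per 4K slab before the patch, 4096/192 = 21
 * after, and 192 is an exact multiple of the 64-byte cacheline. */
#include <stdio.h>

int main(void)
{
	printf("208-byte dentry: %d per 4K page\n", 4096 / 208); /* 19 */
	printf("192-byte dentry: %d per 4K page\n", 4096 / 192); /* 21 */
	printf("192 %% 64 = %d (3 cachelines exactly)\n", 192 % 64);
	return 0;
}
```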

- 30 December 2008 (2 commits)

Committed by Robert Richter

Make code more readable. No functional changes.

Signed-off-by: Robert Richter <robert.richter@amd.com>

Committed by Robert Richter

This patch removes add_us_sample() and simplifies add_sample(). The
code is much more readable now.

Signed-off-by: Robert Richter <robert.richter@amd.com>

- 29 December 2008 (1 commit)

Committed by Robert Richter

This patch renames cpu buffer functions to more oprofile-specific
names. The functions will be moved to the global name space.

Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Robert Richter <robert.richter@amd.com>

- 10 December 2008 (6 commits)

Committed by Robert Richter

This patch replaces the current oprofile cpu buffer implementation with
the ring buffer provided by the tracing framework. The motivation here
is to leave the pain of implementing ring buffers to others. Oh, no,
there are more advantages. The main reason is the support of different
sample sizes that can be stored in the buffer. Use cases for this are
IBS and Cell SPU profiling. Using the new ring buffer ensures valid and
complete samples and allows copying the cpu buffer statelessly without
knowing its content. Second, it will use a generic kernel API and also
reduce code size. And hopefully, there are fewer bugs.

Since the new tracing ring buffer implementation uses spin locks to
protect the buffer during read/write access, it is difficult to use the
buffer in an NMI handler. In this case, writing to the buffer by the
NMI handler (x86) could also occur during critical sections when
reading the buffer. To avoid this, there are 2 buffers for independent
read and write access. Read access is in process context only, write
access only in the NMI handler. If the read buffer runs empty, both
buffers are swapped atomically. There is potentially a small window
during swapping where the buffers are disabled and samples could be
lost.

Using 2 buffers is a little bit of overhead, but the solution is clear
and does not require changes in the ring buffer implementation. It can
be changed to a single buffer solution when the ring buffer access is
implemented as non-locking atomic code.

The new buffer requires more size to store the same amount of samples,
because each sample includes an u32 header. Also, there is more code to
execute for buffer access. Nonetheless, the buffer implementation is
proven in the ftrace environment and worth using also in oprofile.

Patches that change the internal IBS buffer usage will follow.

Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Robert Richter <robert.richter@amd.com>
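A sketch of the two-buffer scheme (names illustrative, not the actual cpu_buffer.c code): the NMI handler only ever writes to the write buffer, process context only ever reads from the read buffer, and the roles swap when the read side runs dry.

```c
#include <linux/ring_buffer.h>

static struct ring_buffer *op_ring_buffer_read;
static struct ring_buffer *op_ring_buffer_write;

static int read_buffer_empty(struct ring_buffer *rb);  /* hypothetical check */

static void maybe_swap_buffers(void)
{
	struct ring_buffer *tmp;

	if (!read_buffer_empty(op_ring_buffer_read))
		return;
	/* Small window here where both buffers are disabled and NMI
	 * samples can be lost, as the commit message notes. */
	tmp = op_ring_buffer_read;
	op_ring_buffer_read = op_ring_buffer_write;
	op_ring_buffer_write = tmp;
}
```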

Committed by Robert Richter

This is in preparation for changes in the cpu buffer implementation.

Signed-off-by: Robert Richter <robert.richter@amd.com>

Committed by Robert Richter

This is in preparation for changes in the cpu buffer implementation.

Signed-off-by: Robert Richter <robert.richter@amd.com>

Committed by Robert Richter

This is in preparation for changes in the cpu buffer implementation.

Signed-off-by: Robert Richter <robert.richter@amd.com>

Committed by Robert Richter

Signed-off-by: Robert Richter <robert.richter@amd.com>

Committed by Robert Richter

Signed-off-by: Robert Richter <robert.richter@amd.com>

- 21 October 2008 (1 commit)

Committed by Carl Love

The issue is that the SPU code is not holding the kernel mutex lock
while adding samples to the kernel buffer.

This patch creates per-SPU buffers to hold the data. Data is added to
the buffers in interrupt context. The data is periodically pushed to
the kernel buffer via a new OProfile function, oprofile_put_buff(). The
oprofile_put_buff() function is called via a work queue, enabling the
function to acquire the mutex lock.

The existing user controls for adjusting the per-CPU buffer size are
used to control the size of the per-SPU buffers. Similarly, overflows
of the SPU buffers are reported by incrementing the per-CPU buffer
stats. This eliminates the need to have architecture-specific controls
for the per-SPU buffers, which is not acceptable to the OProfile user
tool maintainer.

The export of the oprofile add_event_entry() is removed as it is no
longer needed given this patch.

Note, this patch has not addressed the issue of indexing arrays by the
SPU number. This still needs to be fixed, as the SPU numbering is not
guaranteed to be 0 to max_num_spus-1.

Signed-off-by: Carl Love <carll@us.ibm.com>
Signed-off-by: Maynard Johnson <maynardj@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
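A hedged sketch of the deferral pattern: interrupt context fills a per-SPU buffer, and a workqueue periodically pushes it via oprofile_put_buff(), where taking the mutex is allowed. Only oprofile_put_buff() is named by the commit; the struct layout and signature details here are approximate.

```c
#include <linux/workqueue.h>

/* Exported by oprofile per this commit; signature approximate. */
void oprofile_put_buff(unsigned long *buf, unsigned int start,
		       unsigned int stop, unsigned int max);

struct spu_buffer {                 /* hypothetical per-SPU buffer type */
	unsigned long *buff;
	unsigned int head, tail, size;
	struct delayed_work wq;
};

static void wq_sync_spu_buff(struct work_struct *work)
{
	struct spu_buffer *b =
		container_of(work, struct spu_buffer, wq.work);

	/* May sleep: acquires the event-buffer mutex internally. */
	oprofile_put_buff(b->buff, b->tail, b->head, b->size);
	schedule_delayed_work(&b->wq, HZ / 10);  /* re-arm the periodic push */
}
```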

- 20 October 2008 (1 commit)

Committed by Barry Kasindorf

The patch is needed since there is some IBS code in add_ibs_begin()
that handles more than one sample per iteration. This requires calling
get_slots() on each loop iteration. This fixes the current problem, but
a proper solution that reworks the cpu buffer synchronization is needed
here in the future.

Signed-off-by: Barry Kasindorf <barry.kasindorf@amd.com>
Signed-off-by: Robert Richter <robert.richter@amd.com>

- 16 October 2008 (2 commits)

Committed by Robert Richter

Signed-off-by: Robert Richter <robert.richter@amd.com>

Committed by Robert Richter

Signed-off-by: Robert Richter <robert.richter@amd.com>

- 26 July 2008 (4 commits)

Committed by Robert Richter

Signed-off-by: Robert Richter <robert.richter@amd.com>
Cc: oprofile-list <oprofile-list@lists.sourceforge.net>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Barry Kasindorf <barry.kasindorf@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

Committed by Barry Kasindorf

This patchset adds support for the new profiling hardware available in
the latest AMD CPUs to the OProfile driver.

Signed-off-by: Barry Kasindorf <barry.kasindorf@amd.com>
Signed-off-by: Robert Richter <robert.richter@amd.com>
Cc: oprofile-list <oprofile-list@lists.sourceforge.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

Committed by Robert Richter

Signed-off-by: Robert Richter <robert.richter@amd.com>
Cc: oprofile-list <oprofile-list@lists.sourceforge.net>
Cc: Barry Kasindorf <barry.kasindorf@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

Committed by Robert Richter

Signed-off-by: Robert Richter <robert.richter@amd.com>
Cc: oprofile-list <oprofile-list@lists.sourceforge.net>
Cc: Barry Kasindorf <barry.kasindorf@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

- 28 April 2008 (1 commit)

Committed by Mike Travis

Change cpu_buffer from an array to a per_cpu variable in the oprofile
functions.

[akpm@linux-foundation.org: coding-style fixes]
Cc: Philippe Elie <phil.el@wanadoo.fr>
Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
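The shape of the conversion (illustrative of the change described above; the real variable lives in drivers/oprofile/cpu_buffer.c):

```c
/* before:
 *     struct oprofile_cpu_buffer cpu_buffer[NR_CPUS] __cacheline_aligned;
 *     ... cpu_buffer[cpu] ...
 */
#include <linux/percpu.h>

DEFINE_PER_CPU(struct oprofile_cpu_buffer, cpu_buffer);

/* and accesses become &per_cpu(cpu_buffer, cpu) */
```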

- 15 February 2008 (1 commit)

Committed by Jan Blunck

get_dcookie() is always called with a dentry and a vfsmount from a
struct path. Make get_dcookie() take it directly as an argument.

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Jan Blunck <jblunck@suse.de>
Acked-by: Christoph Hellwig <hch@infradead.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Cc: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
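The interface change as a sketch (prototypes as the commit text implies; treat them as approximate):

```c
#include <linux/path.h>

/* before: dentry and vfsmount passed separately, always torn out of a
 * struct path at the call site:
 *
 *     int get_dcookie(struct dentry *dentry, struct vfsmount *vfsmnt,
 *                     unsigned long *cookie);
 */

/* after: the struct path that already bundles both is passed directly */
int get_dcookie(struct path *path, unsigned long *cookie);
```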

- 21 July 2007 (1 commit)

Committed by Bob Nelson

From: Maynard Johnson <mpjohn@us.ibm.com>

This patch updates the existing arch/powerpc/oprofile/op_model_cell.c
to add in the SPU profiling capabilities. In addition, a 'cell'
subdirectory was added to arch/powerpc/oprofile to hold Cell-specific
SPU profiling code. Exports spu_set_profile_private_kref and
spu_get_profile_private_kref, which are used by OProfile to store
private profile information in spufs data structures.

Also incorporated several fixes from other patches (rrn):
- Check pointer returned from kzalloc.
- Eliminated unnecessary cast.
- Better error handling and cleanup in the related area.
- A 64-bit unsigned long parameter was being demoted to a 32-bit
  unsigned int and eventually promoted back to unsigned long.

Signed-off-by: Carl Love <carll@us.ibm.com>
Signed-off-by: Maynard Johnson <mpjohn@us.ibm.com>
Signed-off-by: Bob Nelson <rrnelson@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>

- 22 May 2007 (1 commit)

Committed by Alexey Dobriyan

First thing mm.h does is include sched.h, solely for the can_do_mlock()
inline function, which has a "current" dereference inside. By dealing
with can_do_mlock(), mm.h can be detached from sched.h, which is good.
See below why.

This patch:
a) removes the unconditional inclusion of sched.h from mm.h
b) makes can_do_mlock() a normal function in mm/mlock.c
c) exports can_do_mlock() to not break compilation
d) adds sched.h inclusions back to files that were getting it
   indirectly
e) adds less bloated headers to some files (asm/signal.h, jiffies.h)
   that were getting them indirectly

Net result is:
a) mm.h users would get less code to open, read, preprocess, parse, ...
   if they don't need sched.h
b) sched.h stops being a dependency for a significant number of files:
   on x86_64 allmodconfig, touching sched.h results in a recompile of
   4083 files; after the patch it's only 3744 (-8.3%).

Cross-compile tested on all arm defconfigs, all mips defconfigs, all
powerpc defconfigs, alpha alpha-up arm i386 i386-up i386-defconfig
i386-allnoconfig ia64 ia64-up m68k mips parisc parisc-up powerpc
powerpc-up s390 s390-up sparc sparc-up sparc64 sparc64-up um-x86_64
x86_64 x86_64-up x86_64-defconfig x86_64-allnoconfig, as well as my two
usual configs.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
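Step (b) in sketch form, close to what such a helper plausibly looks like in mm/mlock.c (simplified; treat the details as approximate). mm.h then keeps only the prototype, so the "current" dereference, and with it the sched.h dependency, leaves the header:

```c
/* mm/mlock.c */
#include <linux/capability.h>
#include <linux/sched.h>
#include <linux/module.h>

int can_do_mlock(void)
{
	if (capable(CAP_IPC_LOCK))
		return 1;
	if (current->signal->rlim[RLIMIT_MEMLOCK].rlim_cur != 0)
		return 1;
	return 0;
}
EXPORT_SYMBOL(can_do_mlock);
```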