- 01 Aug 2007, 1 commit
-
-
By Richard Purdie
Add some casts back to the LZO compression algorithm; they were removed during cleanup and shouldn't have been.

Signed-off-by: Richard Purdie <rpurdie@openedhand.com>
Cc: Edward Shishkin <edward@namesys.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 31 Jul 2007, 1 commit
-
-
By Cornelia Huck
Leaving kobject_actions[] in kobject_uevent.c but putting it outside the #ifdef indeed looks like the best solution to me. This way we avoid adding #ifdef CONFIG_HOTPLUG to core.c, when none of the other functions called need such a thing.

Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
- 25 Jul 2007, 1 commit
-
-
By Stephen Rothwell
  lib/fault-inject.c:168: warning: 'debugfs_create_ul_MAX_STACK_TRACE_DEPTH' defined but not used

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Akinobu Mita <akinobu.mita@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 22 Jul 2007, 1 commit
-
-
By Keir Fraser
If the swiotlb maps a multi-slab region, swiotlb_sync_single_range() can be invoked to sync a sub-region which does not include the first slab. Unfortunately io_tlb_orig_addr[] is only initialised for the first slab, and hence the call to sync_single() will read a garbage orig_addr in this case.

This patch fixes the issue by initialising all mapped slabs in io_tlb_orig_addr[]. It also correctly adjusts the buffer pointer in sync_single() to handle the case that the given dma_addr is not aligned on a slab boundary.

Signed-off-by: Keir Fraser <keir.fraser@cl.cam.ac.uk>
Cc: "Luck, Tony" <tony.luck@intel.com>
Acked-by: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 20 Jul 2007, 2 commits
-
-
By Paul Mundt
Slab destructors were no longer supported after Christoph's c59def9f change. They've been BUGs for both slab and slub, and slob never supported them either.

This rips out support for the dtor pointer from kmem_cache_create() completely and fixes up every single callsite in the kernel (there were about 224, not including the slab allocator definitions themselves, or the documentation references).

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Peter Zijlstra
Introduce the core lock statistics code.

Lock statistics provides lock wait-time and hold-time (as well as the count of corresponding contention and acquisition events). Also, the first few call-sites that encounter contention are tracked.

Lock wait-time is the time spent waiting on the lock. This provides insight into the locking scheme; that is, a heavily contended lock is indicative of a too-coarse locking scheme.

Lock hold-time is the duration the lock was held; this provides a reference for the wait-time numbers, so they can be put into perspective.

  1) lock
  2)   ... do stuff ..
     unlock
  3)

The time between 1 and 2 is the wait-time. The time between 2 and 3 is the hold-time.

The lockdep held-lock tracking code is reused, because it already collects locks into meaningful groups (classes), and because it is an existing infrastructure for lock instrumentation.

Currently lockdep tracks lock acquisition with two hooks:

  lock()
    lock_acquire()
    _lock()

  ... code protected by lock ...

  unlock()
    lock_release()
    _unlock()

We need to extend this with two more hooks, in order to measure contention:

  lock_contended() - used to measure contention events
  lock_acquired()  - completion of the contention

These are then placed the following way:

  lock()
    lock_acquire()
    if (!_try_lock())
      lock_contended()
      _lock()
    lock_acquired()

  ... do locked stuff ...

  unlock()
    lock_release()
    _unlock()

(Note: the try_lock() 'trick' is used to avoid instrumenting all platform dependent lock primitive implementations.)

It is also possible to toggle the two lockdep features at runtime using:

  /proc/sys/kernel/prove_locking
  /proc/sys/kernel/lock_stat

(esp. turning off the O(n^2) prove_locking functionality can help)

[akpm@linux-foundation.org: build fixes]
[akpm@linux-foundation.org: nuke unneeded ifdefs]
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Jason Baron <jbaron@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 19 Jul 2007, 1 commit
-
-
By Kay Sievers
This allows the uevent file to handle any type of uevent action to be triggered by userspace instead of just the "add" uevent.

Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
- 18 Jul 2007, 4 commits
-
-
By Jeremy Fitzhardinge
Rather than using a tri-state integer for the wait flag in call_usermodehelper_exec, define a proper enum, and use that. I've preserved the integer values so that any callers I've missed should still work OK.

Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Randy Dunlap <randy.dunlap@oracle.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Andi Kleen <ak@suse.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Bjorn Helgaas <bjorn.helgaas@hp.com>
Cc: Joel Becker <joel.becker@oracle.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Kay Sievers <kay.sievers@vrfy.org>
Cc: Srivatsa Vaddagiri <vatsa@in.ibm.com>
Cc: Oleg Nesterov <oleg@tv-sign.ru>
Cc: David Howells <dhowells@redhat.com>
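A sketch of the resulting interface; the enumerator names and values below follow the usual umh_wait convention and are an assumption rather than a quote from the patch:

    enum umh_wait {
            UMH_NO_WAIT   = -1,     /* don't wait at all */
            UMH_WAIT_EXEC = 0,      /* wait for the exec, not the process */
            UMH_WAIT_PROC = 1,      /* wait for the process to complete */
    };

    /* a call site now passes a symbolic request instead of a magic int */
    err = call_usermodehelper(path, argv, envp, UMH_WAIT_PROC);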
-
By Jeremy Fitzhardinge
argv_split() is a helper function which takes a string, splits it at whitespace, and returns a NULL-terminated argv vector. This is deliberately simple - it does no quote processing of any kind.

[ Seems to me that this is something which is already being done in the kernel, but I couldn't find any other implementations, either to steal or replace. Keep an eye out. ]

Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Randy Dunlap <randy.dunlap@oracle.com>
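A minimal usage sketch of the helper described above, paired with argv_free(); the command string and error handling are illustrative:

    #include <linux/string.h>

    int argc;
    char **argv = argv_split(GFP_KERNEL, "modprobe -q some_module", &argc);

    if (!argv)
            return -ENOMEM;
    /* argv[0] == "modprobe", argv[1] == "-q", argv[argc] == NULL */
    argv_free(argv);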
-
By Jan Nikitenko
Add CRC7 routines, used for example in MMC over SPI communication.

Kerneldoc updates.

[akpm@linux-foundation.org: fix funny mix of const and non-const]
Signed-off-by: Jan Nikitenko <jan.nikitenko@gmail.com>
Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
Cc: "Randy.Dunlap" <rdunlap@xenotime.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
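A hedged usage sketch for the MMC-over-SPI case, assuming crc7() takes a running CRC, a buffer and a length and returns the result aligned to the upper seven bits so only the end bit needs to be OR-ed in:

    #include <linux/crc7.h>

    /* cmd[0..4] holds the 5-byte MMC command frame */
    cmd[5] = crc7(0, cmd, 5) | 0x01;    /* CRC7 plus the end bit */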
-
By Christoph Lameter
kmalloc_node() and kmem_cache_alloc_node() were not available in a zeroing variant in the past. But with __GFP_ZERO it is now possible to zero while allocating.

Use __GFP_ZERO to remove the explicit clearing of memory via memset wherever we can.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
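The transformation this enables, sketched on a hypothetical call site:

    /* before: allocate, then clear by hand */
    p = kmalloc_node(size, GFP_KERNEL, node);
    if (p)
            memset(p, 0, size);

    /* after: the allocator hands back zeroed memory */
    p = kmalloc_node(size, GFP_KERNEL | __GFP_ZERO, node);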
-
- 17 Jul 2007, 10 commits
-
-
By Linus Torvalds
This should avoid build problems on architectures without a "readb()", that got bitten by check_signature() being uninlined.

Noted by Heiko Carstens.

Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Denis Vlasenko
Optimize integer-to-string conversion in vsprintf.c for base 10. This is by far the most used conversion, and in some use cases it impacts performance. For example, top reads /proc/$PID/stat for every process, and with 4000 processes decimal conversion alone takes noticeable time.

Using code from http://www.cs.uiowa.edu/~jones/bcd/decimal.html (with permission from the author, Douglas W. Jones), binary-to-decimal-string conversion is done in groups of five digits at once, using only additions/subtractions/shifts (with -O2; -Os throws in some multiply instructions). On the i386 arch, gcc 4.1.2 -O2 generates ~500 bytes of code.

This patch is run-tested. A userspace benchmark/test is also attached. I tested it on PIII and AMD64 and the new code is generally ~2.5 times faster. On AMD64:

  # ./vsprintf_verify-O2
  Original decimal conv: .......... 151 ns per iteration
  Patched decimal conv:  ..........  62 ns per iteration
  Testing correctness 12895992590592 ok... [Ctrl-C]
  # ./vsprintf_verify-O2
  Original decimal conv: .......... 151 ns per iteration
  Patched decimal conv:  ..........  62 ns per iteration
  Testing correctness 26025406464 ok... [Ctrl-C]

A more realistic test: top from the busybox project was modified to report how many microseconds it took to scan /proc (this does not account for any processing done after that, like sorting the process list), and then I tested it with 4000 processes:

  #!/bin/sh
  i=4000
  while test $i != 0; do
      sleep 30 &
      let i--
  done
  busybox top -b -n3 >/dev/null

On the unpatched kernel:

  top: 4120 processes took 102864 microseconds to scan
  top: 4120 processes took 91757 microseconds to scan
  top: 4120 processes took 92517 microseconds to scan
  top: 4120 processes took 92581 microseconds to scan

On the patched kernel:

  top: 4120 processes took 75460 microseconds to scan
  top: 4120 processes took 66451 microseconds to scan
  top: 4120 processes took 67267 microseconds to scan
  top: 4120 processes took 67618 microseconds to scan

The speedup comes from much faster generation of /proc/PID/stat by sprintf() calls inside the kernel.

Signed-off-by: Douglas W Jones <jones@cs.uiowa.edu>
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Denis Vlasenko
* There is no point in having a full "0...9a...z" constant vector if we use only "0...9a...f" (and "x" for "0x").

* Post-decrement usually needs a few more instructions, so use pre-decrement instead where it makes sense:

  -	while (i < precision--) {
  +	while (i <= --precision) {

* If base != 10 (=> base 8 or 16), we can avoid using division in a loop and use mask/shift instead, obtaining a much faster conversion. (A more complex optimization for the base 10 case is in the second patch.)

Overall, vsprintf.o's text section is ~80 bytes smaller with this patch applied.

Signed-off-by: Douglas W Jones <jones@cs.uiowa.edu>
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
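The shift/mask idea for power-of-two bases, sketched with illustrative variable names (not the patch's actual code):

    /* generic inner loop: one division and one modulo per digit */
    do {
            *buf++ = digits[num % base];
            num /= base;
    } while (num != 0);

    /* base 8 or 16: the same loop using only mask and shift */
    do {
            *buf++ = digits[num & mask];    /* mask  = base - 1          */
            num >>= shift;                  /* shift = 3 (oct) or 4 (hex) */
    } while (num != 0);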
-
By Heiko Carstens
The current generic bug implementation has a call to dump_stack() in case a WARN_ON(whatever) gets hit. Since report_bug(), which calls dump_stack(), gets called from an exception handler we can do better: just pass the pt_regs structure to report_bug() and pass it to show_regs() in case of a warning. This will give more debug information like register contents, etc.

In addition this avoids some pointless lines that dump_stack() emits, since it includes a stack backtrace of the exception handler, which is of no interest in case of a warning. E.g. on s390 the following lines are currently always present in a stack backtrace if dump_stack() gets called from report_bug():

  [<000000000001517a>] show_trace+0x92/0xe8
  [<0000000000015270>] show_stack+0xa0/0xd0
  [<00000000000152ce>] dump_stack+0x2e/0x3c
  [<0000000000195450>] report_bug+0x98/0xf8
  [<0000000000016cc8>] illegal_op+0x1fc/0x21c
  [<00000000000227d6>] sysc_return+0x0/0x10

Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
Acked-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Kyle McMartin <kyle@parisc-linux.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Andrew Morton
This is a rather bizarre thing to have inlined in io.h. Stick it in lib/ instead.

While we're there, despaghetti it a bit, and fix its off-by-one behaviour when passed a zero length.

Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Andrew Morton
Now that we have implemented hotunplug-time counter spilling, percpu_counter_sum() only needs to look at online CPUs.

Cc: Gautham R Shenoy <ego@in.ibm.com>
Cc: Oleg Nesterov <oleg@tv-sign.ru>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Andrew Morton
per-cpu counters presently must iterate over all possible CPUs in the exhaustive percpu_counter_sum(). But it can be much better to only iterate over the presently-online CPUs. To do this, we must arrange for an offlined CPU's count to be spilled into the counter's central count.

We can do this for all percpu_counters in the machine by linking them into a single global list and walking that list at CPU_DEAD time.

(I hope. Might have race windows in which the percpu_counter_sum() count is inaccurate?)

Cc: Gautham R Shenoy <ego@in.ibm.com>
Cc: Oleg Nesterov <oleg@tv-sign.ru>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
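A minimal sketch of the spill step performed from the CPU_DEAD notifier, assuming the usual struct percpu_counter fields (lock, count, counters); not the patch's literal code:

    static void percpu_counter_spill_cpu(struct percpu_counter *fbc, int cpu)
    {
            unsigned long flags;
            s32 *pcount = per_cpu_ptr(fbc->counters, cpu);

            spin_lock_irqsave(&fbc->lock, flags);
            fbc->count += *pcount;          /* fold the dead CPU's delta in */
            *pcount = 0;
            spin_unlock_irqrestore(&fbc->lock, flags);
    }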
-
By Christoph Lameter
Add a new configuration variable, CONFIG_SLUB_DEBUG_ON. If set, the kernel will boot with slab debugging switched on by default, similar to CONFIG_SLAB_DEBUG. Otherwise slab debugging is available but must be enabled by specifying "slub_debug" as a kernel parameter.

Also add support to switch off slab debugging for a kernel that was built with CONFIG_SLUB_DEBUG_ON. This works by specifying slub_debug=- as a kernel parameter.

Dave Jones wanted this feature. http://marc.info/?l=linux-kernel&m=118072189913045&w=2

[akpm@linux-foundation.org: clean up switch statement]
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Kristian Hoegsberg
Remove all ids from the given idr tree. idr_destroy() only frees up unused, cached idp_layers, but this function will remove all id mappings and leave all idp_layers unused.

A typical clean-up sequence for objects stored in an idr tree will use idr_for_each() to free all objects, if necessary, then idr_remove_all() to remove all ids, and idr_destroy() to free up the cached idr_layers.

Signed-off-by: Kristian Hoegsberg <krh@redhat.com>
Cc: Tejun Heo <htejun@gmail.com>
Cc: Dave Airlie <airlied@linux.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
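The clean-up sequence described above, as a sketch (the callback and the stored object type are illustrative):

    static int free_one(int id, void *p, void *data)
    {
            kfree(p);               /* every stored object is freed here */
            return 0;
    }

    idr_for_each(&my_idr, free_one, NULL);  /* free the objects           */
    idr_remove_all(&my_idr);                /* drop every id mapping      */
    idr_destroy(&my_idr);                   /* free the cached idr_layers */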
-
By Kristian Hoegsberg
This patch adds an iterator function for the idr data structure. Compared to just iterating through the idr with an integer and idr_find, this iterator is (almost, but not quite) linear in the number of elements, as opposed to the number of integers in the range covered by the idr. This makes a difference for sparse idrs, but more importantly, it's a nicer way to iterate through the elements.

The drm subsystem is moving to idr for tracking contexts and drawables, and with this change, we can use the idr exclusively for tracking these resources.

[akpm@linux-foundation.org: fix comment]
Signed-off-by: Kristian Hoegsberg <krh@redhat.com>
Cc: Tejun Heo <htejun@gmail.com>
Cc: Dave Airlie <airlied@linux.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 14 Jul 2007, 1 commit
-
-
By David Chinner
XFS filestreams functionality uses radix trees and the preload functions. XFS can be built as a module and hence we need radix_tree_preload() exported. radix_tree_preload_end() is a static inline, so it doesn't need exporting.

Signed-Off-By: Dave Chinner <dgc@sgi.com>
Signed-Off-By: Tim Shimmin <tes@sgi.com>
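The preload pattern the export makes usable from modular code, sketched with an illustrative tree, lock and item:

    if (radix_tree_preload(GFP_NOFS))
            return -ENOMEM;
    spin_lock(&tree_lock);
    error = radix_tree_insert(&tree, index, item);
    spin_unlock(&tree_lock);
    radix_tree_preload_end();       /* static inline, needs no export */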
-
- 12 Jul 2007, 5 commits
-
-
By Tejun Heo
As kobj sysfs dentries and inodes are going to be made reclaimable, a dentry can't be used as the naming token for a sysfs file/directory; replace kobj->dentry with kobj->sd. The only external interface change is shadow directory handling. All other changes are contained in kobj and sysfs.

Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
By Tejun Heo
Implement an idr-based id allocator. ida is used the same way idr is used but lacks the id -> ptr translation and thus consumes much less memory. struct ida_bitmap is attached as leaf nodes to the idr tree, which is managed by the idr code. Each ida_bitmap is 128 bytes long and contains slightly less than a thousand slots.

ida is more aggressive with releasing extra resources acquired using ida_pre_get(). After every successful id allocation, ida frees one reserved idr_layer if possible. Reserved ida_bitmaps are not freed automatically, but only one ida_bitmap is reserved and it's almost always used right away. Under most circumstances, ida won't hold on for long to memory which isn't actively used.

Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
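A usage sketch of the interface as introduced here (ida_pre_get()/ida_get_new()/ida_remove()); the retry on -EAGAIN mirrors the idr convention and the surrounding code is illustrative:

    struct ida my_ida;
    int id, err;

    ida_init(&my_ida);
again:
    if (!ida_pre_get(&my_ida, GFP_KERNEL))
            return -ENOMEM;
    err = ida_get_new(&my_ida, &id);
    if (err == -EAGAIN)
            goto again;
    if (err)
            return err;
    /* ... use id; release it later with ida_remove(&my_ida, id) ... */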
-
By Tejun Heo
Separate out idr_mark_full() from sub_alloc() and make marking the allocated slot full the responsibility of idr_get_new_above_int(). The allocation part of idr_get_new_above_int() is renamed to idr_get_empty_slot(). The new idr_get_new_above_int() allocates a slot using that function, installs the user pointer and marks it full using idr_mark_full().

This doesn't introduce any behavior change. It will be used by ida.

Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
By Tejun Heo
In sub_alloc(), when the bitmap search fails, it goes up one level to continue the search. This is done by updating the id cursor and searching the upper level again. If the cursor was at the end of the upper level, we need to go further than that. This wasn't implemented, and when that happens the part of the cursor which indexes into the upper level wraps and sub_alloc() ends up searching the wrong bitmap: it allocates an id which doesn't match the actual slot.

This patch fixes this by restarting from the top if the search needs to go higher than one level.

Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
By Kay Sievers
We get uevents for a bus/class going away, but not one registering. Add the missing uevent in kset_register(), which will send an event for a new bus/class. Suppress all unwanted uevents for bus subdirectories like /bus/*/devices/, /bus/*/drivers/.

Now we get for module usbcore:

  add      /module/usbcore (module)
  add      /bus/usb (bus)
  add      /class/usb_host (class)
  add      /bus/usb/drivers/hub (drivers)
  add      /bus/usb/drivers/usb (drivers)
  remove   /bus/usb/drivers/usb (drivers)
  remove   /bus/usb/drivers/hub (drivers)
  remove   /class/usb_host (class)
  remove   /bus/usb (bus)
  remove   /module/usbcore (module)

instead of:

  add      /module/usbcore (module)
  add      /bus/usb/drivers/hub (drivers)
  add      /bus/usb/drivers/usb (drivers)
  remove   /bus/usb/drivers/usb (drivers)
  remove   /bus/usb/drivers/hub (drivers)
  remove   /class/usb_host (class)
  remove   /bus/usb/drivers (bus)
  remove   /bus/usb/devices (bus)
  remove   /bus/usb (bus)
  remove   /module/usbcore (module)

Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
- 11 Jul 2007, 1 commit
-
-
By Richard Purdie
This is a hybrid version of the patch to add the LZO1X compression algorithm to the kernel. Nitin and myself have merged the best parts of the various patches to form this version, which we're both happy with (and are jointly signing off).

The performance of this version is equivalent to the original minilzo code it was based on. Bytecode comparisons have also been made on ARM, i386 and x86_64 with favourable results.

There are several users of LZO lined up, including jffs2, crypto and reiser4, since it's much faster than zlib.

Signed-off-by: Nitin Gupta <nitingupta910@gmail.com>
Signed-off-by: Richard Purdie <rpurdie@openedhand.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
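A hedged compression sketch against the new interface; the buffer handling and error mapping are illustrative, and the output buffer is assumed to be at least lzo1x_worst_compress(in_len) bytes:

    #include <linux/lzo.h>
    #include <linux/vmalloc.h>

    size_t out_len;
    void *wrkmem = vmalloc(LZO1X_1_MEM_COMPRESS);

    if (!wrkmem)
            return -ENOMEM;
    err = lzo1x_1_compress(in, in_len, out, &out_len, wrkmem);
    vfree(wrkmem);
    if (err != LZO_E_OK)
            return -EIO;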
-
- 10 Jul 2007, 1 commit
-
-
By Ingo Molnar
Enable CONFIG_SCHED_DEBUG in lib/Kconfig.debug. The runtime overhead of this option is very small.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 09 Jun 2007, 2 commits
-
-
By Randy Dunlap
Add a prefix string parameter. Callers are responsible for any string length/alignment that they want to see in the output. I.e., callers should pad strings to achieve alignment if they want that.

Add a rowsize parameter. This is the number of raw data bytes to be printed per line. Must be 16 or 32.

Add a groupsize parameter. This allows callers to dump values as 1-byte, 2-byte, 4-byte, or 8-byte numbers. Default is 1-byte numbers. If the total length is not an even multiple of groupsize, 1-byte numbers are printed.

Add an "ascii" output parameter. This causes ASCII data output following the hex data output.

Clean up some doc examples. Align the ASCII output on all lines that are produced by one call.

Add a new interface, print_hex_dump_bytes(), that is a shortcut to print_hex_dump(), using default parameter values to print 16 bytes in byte-size chunks of hex + ASCII output, using printk level KERN_DEBUG.

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Cc: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
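Illustrative calls with the extended parameter list and the new shortcut (the prefix strings and buffers are made up):

    /* 16 bytes per row, 4-byte groups, offset prefix, trailing ASCII */
    print_hex_dump(KERN_DEBUG, "mydev: ", DUMP_PREFIX_OFFSET,
                   16, 4, buf, len, true);

    /* shortcut: KERN_DEBUG, 16 bytes per row, 1-byte groups, ASCII on */
    print_hex_dump_bytes("mydev: ", DUMP_PREFIX_ADDRESS, buf, len);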
-
By Greg Kroah-Hartman
Thanks to Jean Delvare <khali@linux-fr.org> for pointing it out to me.

Cc: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
- 01 Jun 2007, 1 commit
-
-
By Ingo Molnar
Make timer-stats have almost zero overhead when enabled in the config but not used. (This way distros can enable it more easily.)

Also update the documentation about the overhead of timer_stats - it was written for the first version, which had a global lock and a linear list walk based lookup ;-)

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 24 May 2007, 1 commit
-
-
By Paul E. McKenney
There have been a number of instances where people have accidentally compiled rcutorture into the kernel (CONFIG_RCU_TORTURE_TEST=y), which has never been useful, and has often resulted in great frustration. The attached patch prohibits rcutorture from being compiled into the kernel. It may be excluded altogether or compiled as a module. People wishing to have rcutorture hammer their machine immediately upon boot are free to hand-edit lib/Kconfig.debug to remove the "depends on m" line.

Thanks to Randy Dunlap for the trick that makes this work.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Josh Triplett <josh@kernel.org>
Cc: "Randy.Dunlap" <rdunlap@xenotime.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 22 May 2007, 1 commit
-
-
By Alexey Dobriyan
The first thing mm.h does is include sched.h, solely for the can_do_mlock() inline function, which has a "current" dereference inside. By dealing with can_do_mlock(), mm.h can be detached from sched.h, which is good. See below why.

This patch

  a) removes the unconditional inclusion of sched.h from mm.h
  b) makes can_do_mlock() a normal function in mm/mlock.c
  c) exports can_do_mlock() to not break compilation
  d) adds sched.h inclusions back to files that were getting it indirectly
  e) adds less bloated headers to some files (asm/signal.h, jiffies.h) that were getting them indirectly

Net result is:

  a) mm.h users would get less code to open, read, preprocess, parse, ... if they don't need sched.h
  b) sched.h stops being a dependency for a significant number of files: on x86_64 allmodconfig, touching sched.h results in a recompile of 4083 files; after the patch it's only 3744 (-8.3%).

Cross-compile tested on all arm defconfigs, all mips defconfigs, all powerpc defconfigs, alpha alpha-up arm i386 i386-up i386-defconfig i386-allnoconfig ia64 ia64-up m68k mips parisc parisc-up powerpc powerpc-up s390 s390-up sparc sparc-up sparc64 sparc64-up um-x86_64 x86_64 x86_64-up x86_64-defconfig x86_64-allnoconfig, as well as my two usual configs.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 13 May 2007, 1 commit
-
-
By Akinobu Mita
Disable stacktrace filter support for x86-64 for now. It will be enabled again when we can get the dwarf2 unwinder back.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Acked-by: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 11 May 2007, 4 commits
-
-
By Randy Dunlap
Based on ace_dump_mem() from Grant Likely for the Xilinx SystemACE CompactFlash interface.

Add print_hex_dump() & hex_dumper() to lib/hexdump.c and linux/kernel.h.

This patch adds the functions print_hex_dump() & hex_dumper(). print_hex_dump() can be used to perform a hex + ASCII dump of data to syslog, in an easily viewable format, thus providing a common text hex dump format. hex_dumper() provides a dump-to-memory function. It converts one "line" of output (16 bytes of input) at a time.

Example usages:

  print_hex_dump(KERN_DEBUG, DUMP_PREFIX_ADDRESS, frame->data, frame->len);
  hex_dumper(frame->data, frame->len, linebuf, sizeof(linebuf));

Example output using %DUMP_PREFIX_OFFSET:

  0009ab42: 40414243 44454647 48494a4b 4c4d4e4f-@ABCDEFG HIJKLMNO

Example output using %DUMP_PREFIX_ADDRESS:

  ffffffff88089af0: 70717273 74757677 78797a7b 7c7d7e7f-pqrstuvw xyz{|}~.

[akpm@linux-foundation.org: cleanups, add export]
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Amy Griffis
When auditing syscalls that send signals, log the pid and security context for each target process. Optimize the data collection by adding a counter for signal-related rules, and avoiding allocating an aux struct unless we have more than one target process. For process groups, collect pid/context data in blocks of 16. Move the audit_signal_info() hook up in check_kill_permission() so we audit attempts where permission is denied.

Signed-off-by: Amy Griffis <amy.griffis@hp.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
By Amy Griffis
Add a syscall class for sending signals.

Signed-off-by: Amy Griffis <amy.griffis@hp.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
By Ivo van Doorn
This will add the CRC calculation according to the CRC ITU-T V.41 to the kernel lib/ folder. This code has been derived from the rt2x00 driver, currently found only in the wireless-dev tree, but this library is generic and could be used by more drivers who currently use their own implementation.

Signed-off-by: Ivo van Doorn <IvDoorn@gmail.com>

Also useful for the new firewire stack.

Signed-off-by: Kristian Hoegsberg <krh@redhat.com>
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
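A minimal usage sketch, assuming the conventional running-CRC interface (seed, buffer, length):

    #include <linux/crc-itu-t.h>

    u16 crc = crc_itu_t(0, buf, len);   /* seed of 0, whole buffer */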
-
- 10 May 2007, 1 commit
-
-
By Rafael J. Wysocki
Since nonboot CPUs are now disabled after tasks and devices have been frozen and the CPU hotplug infrastructure is used for this purpose, we need special CPU hotplug notifications that will help the CPU-hotplug-aware subsystems distinguish normal CPU hotplug events from CPU hotplug events related to a system-wide suspend or resume operation in progress.

This patch introduces such notifications and causes them to be used during suspend and resume transitions. It also changes all of the CPU-hotplug-aware subsystems to take these notifications into consideration (for now they are handled in the same way as the corresponding "normal" ones).

[oleg@tv-sign.ru: cleanups]
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Cc: Gautham R Shenoy <ego@in.ibm.com>
Cc: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-