- 20 August 2010, 3 commits
-
-
Committed by Arnd Bergmann
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Nick Piggin <npiggin@suse.de>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
-
Committed by Paul E. McKenney
Also set the default to 60 seconds, up from the previous hard-coded timeout of 10 seconds. This lets people who care set short timeouts, while sparing people with unusual configurations (make randconfig!!!) from spurious CPU stall warnings.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
-
Committed by Paul E. McKenney
This commit provides definitions for the __rcu annotation defined earlier. This annotation permits sparse to check for correct use of RCU-protected pointers. If a pointer that is annotated with __rcu is accessed directly (as opposed to via rcu_dereference(), rcu_assign_pointer(), or one of their variants), sparse can be made to complain. To enable such complaints, use the new default-disabled CONFIG_SPARSE_RCU_POINTER kernel configuration option. Please note that these sparse complaints are intended to be a debugging aid, -not- a code-style-enforcement mechanism. There are special rcu_dereference_protected() and rcu_access_pointer() accessors for use when RCU read-side protection is not required, for example, when no other CPU has access to the data structure in question or while the current CPU holds the update-side lock. This patch also updates a number of docbook comments that were showing their age.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Christopher Li <sparse@chrisli.org>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
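For readers unfamiliar with the annotation, a minimal usage sketch follows (illustrative only, not taken from the patch; the struct and variable names are made up). With CONFIG_SPARSE_RCU_POINTER=y, sparse flags any direct dereference of the __rcu pointer, while the accessors below pass cleanly:

    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct config {
            int value;
    };

    /* Hypothetical RCU-protected global; sparse now tracks its __rcu address space. */
    static struct config __rcu *cur_config;

    static int read_config_value(void)
    {
            struct config *c;
            int v = -1;

            rcu_read_lock();
            c = rcu_dereference(cur_config);  /* OK; dereferencing cur_config directly would warn */
            if (c)
                    v = c->value;
            rcu_read_unlock();
            return v;
    }

    static void publish_config(int value)
    {
            struct config *newc = kmalloc(sizeof(*newc), GFP_KERNEL);

            if (!newc)
                    return;
            newc->value = value;
            /* Publish the new pointer; freeing the old object must wait for a grace period. */
            rcu_assign_pointer(cur_config, newc);
    }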
-
- 17 August 2010, 1 commit
-
-
Committed by Randy Dunlap
    warning: (LATENCYTOP && HAVE_LATENCYTOP_SUPPORT) selects SCHED_DEBUG which has unmet direct dependencies (DEBUG_KERNEL && PROC_FS)
    warning: (LATENCYTOP && HAVE_LATENCYTOP_SUPPORT) selects SCHEDSTATS which has unmet direct dependencies (DEBUG_KERNEL && PROC_FS)

Add depends on STACKTRACE_SUPPORT for 'select STACKTRACE'. Add depends on PROC_FS since that is where the output goes.
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
LKML-Reference: <20100812123121.a7c99cde.randy.dunlap@oracle.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 13 August 2010, 1 commit
-
-
Committed by David Howells
Don't try to #include <linux/slab.h> in lib/inflate.c from the bootloader code, as linux/slab.h hauls in function defs that aren't available in the bootloader code and may also haul in conflicting functions. To fix this, make the inclusion of linux/slab.h contingent on NO_INFLATE_MALLOC, as are the usages of kmalloc() and kfree(). On MN10300, this causes the following errors:

    In file included from include/linux/string.h:21,
                     from include/linux/bitmap.h:8,
                     from include/linux/nodemask.h:93,
                     from include/linux/mmzone.h:16,
                     from include/linux/gfp.h:4,
                     from include/linux/slab.h:12,
                     from arch/mn10300/boot/compressed/../../../../lib/inflate.c:106,
                     from arch/mn10300/boot/compressed/misc.c:170:
    /warthog/am33/linux-2.6-mn10300/arch/mn10300/include/asm/string.h:19: error: conflicting types for 'memset'
    arch/mn10300/boot/compressed/misc.c:59: error: previous definition of 'memset' was here

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 12 August 2010, 2 commits
-
-
Committed by NeilBrown
Rename raid6/raid6x86.h to raid6/x86.h and modify some comments.
Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
Some bit-rot needs to be cleaned out.
Signed-off-by: NeilBrown <neilb@suse.de>
-
- 11 August 2010, 4 commits
-
-
Committed by Prarit Bhargava
Fix checkstack error:

    lib/decompress_bunzip2.c: In function `get_next_block':
    lib/decompress_bunzip2.c:511: warning: the frame size of 1932 bytes is larger than 1024 bytes

byteCount, symToByte, and mtfSymbol cannot be declared static or allocated dynamically, so place them in the bunzip_data struct.
Signed-off-by: Prarit Bhargava <prarit@redhat.com>
Cc: Phillip Lougher <phillip@lougher.demon.co.uk>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Anton Blanchard
We are missing the oops end marker for the exception-based WARN implementation in lib/bug.c. This is useful for logfile analysis tools.
Signed-off-by: Anton Blanchard <anton@samba.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Anton Blanchard
There are a few issues with the exception-based WARN implementation in lib/bug.c:
- Inconsistent printk flags. The "cut here" line is printed at KERN_EMERG, so the console and all logged-in users see the single line "------------[ cut here ]------------" for each WARN. Fix this so we print everything at KERN_WARNING to match the kernel/panic.c version.
- The lib/bug.c WARN would print "Badness at". Change it to match the kernel/panic.c version, which prints "WARNING: at".
- Print the list of modules, similar to kernel/panic.c.
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Anton Blanchard <anton@samba.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by David Woodhouse
Linus asks 'why "raid6" twice?'. No reason.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
- 10 August 2010, 11 commits
-
-
Committed by Michel Lespinasse
More code can be pushed from rwsem_down_read_failed and rwsem_down_write_failed into rwsem_down_failed_common. The following change, which adds down_read_critical infrastructure support, also benefits from having the flags available in a register rather than having to fish them out of the struct rwsem_waiter...
Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: David Howells <dhowells@redhat.com>
Cc: Mike Waychison <mikew@google.com>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: Ying Han <yinghan@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Michel Lespinasse
This change addresses the following situation:
- Thread A acquires the rwsem for read.
- Thread B tries to acquire the rwsem for write, notices there is already an active owner for the rwsem.
- Thread C tries to acquire the rwsem for read, notices that thread B already tried to acquire it.
- Thread C grabs the spinlock and queues itself on the wait queue.
- Thread B grabs the spinlock and queues itself behind C. At this point A is the only remaining active owner on the rwsem.
In this situation thread B could notice that it was the last active writer on the rwsem, and decide to wake C to let it proceed in parallel with A, since they both only want the rwsem for read.
Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: David Howells <dhowells@redhat.com>
Cc: Mike Waychison <mikew@google.com>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: Ying Han <yinghan@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Michel Lespinasse
Previously each waiting thread added a bias of RWSEM_WAITING_BIAS. With this change, the bias is added only once to indicate that the wait list is non-empty. This has a few nice properties which will be used in following changes:
- when the spinlock is held and the waiter list is known to be non-empty, count < RWSEM_WAITING_BIAS <=> there is an active writer on that sem
- count == RWSEM_WAITING_BIAS <=> there are waiting threads and no active readers/writers on that sem
Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: David Howells <dhowells@redhat.com>
Cc: Mike Waychison <mikew@google.com>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: Ying Han <yinghan@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Michel Lespinasse
In __rwsem_do_wake(), we can skip the active count check unless we come there from up_xxxx(). Also when checking the active count, it is not actually necessary to increment it; this allows us to get rid of the read side undo code and simplify the calculation of the final rwsem count adjustment once we've counted the reader threads to wake. The basic observation is the following. When there are waiter threads on a rwsem and the spinlock is held, other threads can only increment the active count by trying to grab the rwsem in down_xxxx(). However down_xxxx() will notice there are waiter threads and take the down_failed path, blocking to acquire the spinlock on the way there. Therefore, a thread observing an active count of zero with waiters queued and the spinlock held, is protected against other threads acquiring the rwsem until it wakes the last waiter or releases the spinlock.
Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: David Howells <dhowells@redhat.com>
Cc: Mike Waychison <mikew@google.com>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: Ying Han <yinghan@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Michel Lespinasse
This is in preparation for later changes in the series. In __rwsem_do_wake(), the first queued waiter is checked first in order to determine whether it's a writer or a reader. The code paths diverge at this point. The code that checks and increments the rwsem active count is duplicated on both sides - the point is that later changes in the series will be able to independently modify both sides.
Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: David Howells <dhowells@redhat.com>
Cc: Mike Waychison <mikew@google.com>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: Ying Han <yinghan@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Eric Paris
Getting and putting arrays of pointers with flex arrays is a PITA. You have to remember to pass &ptr to the _put and you have to do weird and wacky casting to get the ptr back from the _get. Add two functions, flex_array_get_ptr() and flex_array_put_ptr(), to handle all of the magic.
[akpm@linux-foundation.org: simplification suggested by Joe]
Signed-off-by: Eric Paris <eparis@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Dave Hansen <dave@linux.vnet.ibm.com>
Cc: Joe Perches <joe@perches.com>
Cc: James Morris <jmorris@namei.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
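A hedged usage sketch of the two helpers (the function names come from the commit; the element type, wrapper functions, and signatures as written here are my recollection of the era's API and should be treated as assumptions):

    #include <linux/flex_array.h>
    #include <linux/gfp.h>

    struct session;     /* hypothetical element type */

    /* Store and fetch pointers without the &ptr / cast gymnastics. */
    static int stash_session(struct flex_array *fa, unsigned int idx,
                             struct session *s)
    {
            /* Previously: flex_array_put(fa, idx, &s, GFP_KERNEL); */
            return flex_array_put_ptr(fa, idx, s, GFP_KERNEL);
    }

    static struct session *fetch_session(struct flex_array *fa, unsigned int idx)
    {
            /* Previously: flex_array_get() plus casting to recover the pointer. */
            return flex_array_get_ptr(fa, idx);
    }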
-
Committed by Michal Nazarewicz
The strict_strtoul() and strict_strtoull() functions used strlen() to check the argument's length in a situation where it wasn't strictly necessary.
Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
Cc: "Yi Yang" <yi.y.yang@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Baruch Siach
Use the magic LIST_POISON* values to detect an incorrect use of list_del on an already deleted entry. This DEBUG_LIST-specific warning is easier to understand than the generic Oops message caused by a LIST_POISON dereference.
Signed-off-by: Baruch Siach <baruch@tkos.co.il>
Cc: Dave Jones <davej@codemonkey.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
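Roughly, the check amounts to the following (an illustrative sketch, not the actual lib/list_debug.c hunk; the warning text in particular is made up):

    #include <linux/list.h>
    #include <linux/poison.h>
    #include <linux/bug.h>

    /* DEBUG_LIST flavour of list_del(): a second list_del() on the same
     * entry sees the poison values left behind by the first one. */
    static void debug_list_del(struct list_head *entry)
    {
            WARN(entry->next == LIST_POISON1 || entry->prev == LIST_POISON2,
                 "list_del on an already deleted entry %p\n", entry);

            __list_del(entry->prev, entry->next);
            entry->next = LIST_POISON1;
            entry->prev = LIST_POISON2;
    }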
-
Committed by Anton Blanchard
A profile of a network benchmark showed iommu_num_pages rather high up:

    0.52%  iommu_num_pages

Looking at the profile, an integer divide is taking almost all of the time:

    %     : c000000000376ea4 <.iommu_num_pages>:
    1.93  : c000000000376ea4: fb e1 ff f8   std r31,-8(r1)
    0.00  : c000000000376ea8: f8 21 ff c1   stdu r1,-64(r1)
    0.00  : c000000000376eac: 7c 3f 0b 78   mr r31,r1
    3.86  : c000000000376eb0: 38 84 ff ff   addi r4,r4,-1
    0.00  : c000000000376eb4: 38 05 ff ff   addi r0,r5,-1
    0.00  : c000000000376eb8: 7c 84 2a 14   add r4,r4,r5
    46.95 : c000000000376ebc: 7c 00 18 38   and r0,r0,r3
    45.66 : c000000000376ec0: 7c 84 02 14   add r4,r4,r0
    0.00  : c000000000376ec4: 7c 64 2b 92   divdu r3,r4,r5
    0.00  : c000000000376ec8: 38 3f 00 40   addi r1,r31,64
    0.00  : c000000000376ecc: eb e1 ff f8   ld r31,-8(r1)
    1.61  : c000000000376ed0: 4e 80 00 20   blr

Since every caller of iommu_num_pages passes in a constant power of two, we can inline this such that the divide is replaced by a shift. The entire function is only a few instructions once optimised, so it is a good candidate for inlining overall.
Signed-off-by: Anton Blanchard <anton@samba.org>
Cc: Akinobu Mita <akinobu.mita@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
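The inlined helper is roughly the following (a sketch of the idea, not a verbatim copy of include/linux/iommu-helper.h). With io_page_size a compile-time constant power of two, the DIV_ROUND_UP compiles down to a shift:

    #include <linux/kernel.h>

    static inline unsigned long iommu_num_pages(unsigned long addr,
                                                unsigned long len,
                                                unsigned long io_page_size)
    {
            /* Bytes from the start of the first IO page to the end of the
             * buffer, rounded up to whole IO pages. */
            unsigned long size = (addr & (io_page_size - 1)) + len;

            return DIV_ROUND_UP(size, io_page_size);
    }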
-
Committed by Jan Kara
Implement a function for setting one tag if another tag is set for each item in a given range.
Signed-off-by: Jan Kara <jack@suse.cz>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
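A hedged sketch of how a caller might drive such a function in batches (the argument order is my recollection of this era's radix-tree API and should be treated as an assumption; the batch size, wrapper name, and locking strategy are invented):

    #include <linux/radix-tree.h>

    #define RETAG_BATCH 1024    /* hypothetical batch size */

    /* For every item in [start, end] carrying 'iftag', also set 'settag',
     * a batch at a time so callers can drop locks between rounds. */
    static void retag_range(struct radix_tree_root *root, unsigned long start,
                            unsigned long end, unsigned int iftag,
                            unsigned int settag)
    {
            unsigned long tagged;

            while (start <= end) {
                    tagged = radix_tree_range_tag_if_tagged(root, &start, end,
                                                            RETAG_BATCH,
                                                            iftag, settag);
                    /* 'start' is advanced past the last item examined. */
                    if (tagged < RETAG_BATCH)
                            break;
            }
    }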
-
Committed by Tim Chen
Add percpu_counter_compare, which allows a quick but accurate comparison of a percpu_counter with a given value. A rough count is provided by the count field in the percpu_counter structure, without accounting for the values stored in the individual cpu counters. The actual count is the sum of count and the cpu counters. However, the count field never differs from the actual value by more than batch*num_online_cpus(). So if count differs from the given value by more than this margin, we do not need the actual count for the comparison, which allows a quick comparison without summing up all the per-cpu counters.
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
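The idea can be sketched as below (a paraphrase of the approach described above, not a verbatim copy of lib/percpu_counter.c; the strcmp-like return convention and local function name are assumptions):

    #include <linux/percpu_counter.h>
    #include <linux/cpumask.h>

    /* Returns >0, <0 or 0 depending on how the counter compares to rhs. */
    static int my_percpu_counter_compare(struct percpu_counter *fbc, s64 rhs)
    {
            s64 count = percpu_counter_read(fbc);
            s64 error = (s64)percpu_counter_batch * num_online_cpus();

            /* The cheap approximate count is within 'error' of the true
             * value, so a large enough gap is already decisive. */
            if (count - rhs > error)
                    return 1;
            if (rhs - count > error)
                    return -1;

            /* Too close to call: pay for the exact per-cpu sum. */
            count = percpu_counter_sum(fbc);
            if (count > rhs)
                    return 1;
            if (count < rhs)
                    return -1;
            return 0;
    }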
-
- 06 August 2010, 1 commit
-
-
Committed by Randy Dunlap
debugfs no longer uses 'kernel_subsys' (which is gone), and the other kernel/ksysfs.c code is always built, so DEBUG_FS does not need to depend on SYSFS. Fixes this kconfig warning:

    warning: (TREE_RCU_TRACE || AMD_IOMMU_STATS && AMD_IOMMU || MTD_UBI_DEBUG && MTD && SYSFS && MTD_UBI || UBIFS_FS_DEBUG && MISC_FILESYSTEMS && UBIFS_FS || DEBUG_KMEMLEAK && DEBUG_KERNEL && EXPERIMENTAL && !MEMORY_HOTPLUG && (X86 || ARM || PPC || S390 || SPARC64 || SUPERH || MICROBLAZE) && SYSFS || TRACING || X86_PTDUMP && DEBUG_KERNEL || BLK_DEV_IO_TRACE && TRACING_SUPPORT && FTRACE && SYSFS && BLOCK) selects DEBUG_FS which has unmet direct dependencies (SYSFS)

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
- 04 August 2010, 1 commit
-
-
Committed by Michal Simek
Microblaze doesn't support frame pointers. Ftrace code uses CALLER_ADDR1, which is defined in linux/ftrace.h; on Microblaze it is 0.
Signed-off-by: Michal Simek <monstr@monstr.eu>
-
- 29 July 2010, 1 commit
-
-
Committed by Chris Wilson
kmemleak ignores page_alloc() and so believes the final sub-page allocation using plain kmalloc is decoupled and lost. This leads to lots of false positives with code that uses scatterlists. The options seem to be either to tell kmemleak that the kmalloc is not leaked or to notify kmemleak of the page allocations. The danger of the first approach is that we may hide a real leak, so choose the latter approach (of which I am not sure of the downsides). v2: Added comments on the suggestion of Catalin.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <jaxboe@fusionio.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 27 July 2010, 1 commit
-
-
Committed by Will Deacon
ARM has support for the atomic64_dec_if_positive operation, so ensure that it is tested by the atomic64_test routine.
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
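As an illustration of what such a test checks (a hedged sketch, not the actual lib/atomic64_test.c hunk; the assumed semantics are "decrement only if the result stays non-negative, return old value minus one"):

    #include <asm/atomic.h>
    #include <linux/bug.h>

    static void check_atomic64_dec_if_positive(void)
    {
            atomic64_t v = ATOMIC64_INIT(1);

            /* 1 -> 0: decrement happens, returns 0 */
            BUG_ON(atomic64_dec_if_positive(&v) != 0);
            BUG_ON(atomic64_read(&v) != 0);

            /* already 0: no decrement, returns a negative value */
            BUG_ON(atomic64_dec_if_positive(&v) >= 0);
            BUG_ON(atomic64_read(&v) != 0);
    }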
-
- 21 July 2010, 1 commit
-
-
Committed by Takuya Yoshikawa
The last 't' of 'fault' is missing in the description of FAIL_IO_TIMEOUT.
Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 19 July 2010, 1 commit
-
-
Committed by Jason Baron
Introduce a new DEBUG_KMEMLEAK_DEFAULT_OFF config parameter that allows kmemleak to be disabled by default but enabled on the command line via kmemleak=on. Although a reboot is required to turn it on, it's still useful to not require a recompile.
Signed-off-by: Jason Baron <jbaron@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
- 14 July 2010, 2 commits
-
-
Committed by Andi Kleen
Newer gcc has a -femit-struct-debug-baseonly option that dramatically reduces the size of object files with debug info. What it does is emit type information for structures only when the structures are defined in the same file or in a header file, which means the type information for most headers is not included. This is not good when the type information is actually needed (e.g. with kgdb or systemtap), but often kernel hackers only care about line numbers and don't need all the type information anyway. In that case setting the option can be a big win: a build dir for a specific x86-64 configuration with gcc 4.5 shrank from 2.3G to 1.2G. The compilation was also nearly a minute faster.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
[mmarek: reformatted help text]
Signed-off-by: Michal Marek <mmarek@suse.cz>
-
Committed by Yinghai Lu
Renamed via the following script:

    FILES=$(find * -type f | grep -vE 'oprofile|[^K]config')
    sed -i \
        -e 's/lmb/memblock/g' \
        -e 's/LMB/MEMBLOCK/g' \
        $FILES
    for N in $(find . -name lmb.[ch]); do
        M=$(echo $N | sed 's/lmb/memblock/g')
        mv $N $M
    done

and remove some wrong changes, like lmbench and dlmb etc. Also move memblock.c from lib/ to mm/.
Suggested-by: Ingo Molnar <mingo@elte.hu>
Acked-by: "H. Peter Anvin" <hpa@zytor.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 12 July 2010, 1 commit
-
-
Committed by Kulikov Vasiliy
'Unamp' should be 'Unmap'.
Signed-off-by: Kulikov Vasiliy <segooon@gmail.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 10 July 2010, 1 commit
-
-
Committed by Kenji Kaneshige
The current x86 ioremap() doesn't handle physical addresses higher than 32-bit properly in X86_32 PAE mode. When a physical address higher than 32-bit is passed to ioremap(), the upper 32 bits of the physical address are wrongly cleared. Due to this bug, ioremap() can map the wrong address into the linear address space. In my case, a 64-bit MMIO region was assigned to a PCI device (ioat device) on my system. Because of ioremap()'s bug, the wrong physical address (instead of the MMIO region) was mapped into the linear address space. Because of this, loading the ioatdma driver caused unexpected behavior (kernel panic, kernel hangup, ...).
Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
LKML-Reference: <4C1AE680.7090408@jp.fujitsu.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
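The failure mode is the classic narrowing-assignment truncation; a minimal sketch (not the actual arch/x86/mm/ioremap.c code) of what goes wrong on X86_32 PAE, where phys_addr_t is 64-bit but unsigned long is still 32-bit:

    #include <linux/types.h>

    static void truncation_demo(phys_addr_t mmio_base)  /* e.g. an address above 4GB */
    {
            unsigned long buggy = mmio_base;  /* high 32 bits silently dropped */
            phys_addr_t   fixed = mmio_base;  /* full-width type keeps them    */

            (void)buggy;
            (void)fixed;
    }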
-
- 05 July 2010, 3 commits
-
-
Committed by Peter Zijlstra
Reimplement augmented RB-trees without sprinkling extra branches all over the RB-tree code (which lives in the scheduler hot path). This approach is 'borrowed' from Fabio's BFQ implementation and relies on traversing the rebalance path after the RB-tree op to correct the heap property for insertion/removal and make up for the damage done by the tree rotations. For insertion the rebalance path is trivially that from the new node upwards to the root; for removal it is that from the deepest node in the path from the to-be-removed node that will still be around after the removal.
[ This patch also fixes a video driver regression reported by Ali Gholami Rudi - the memtype->subtree_max_end was updated incorrectly. ]
Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
Acked-by: Venkatesh Pallipadi <venki@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Tested-by: Ali Gholami Rudi <ali@rudi.ir>
Cc: Fabio Checconi <fabio@gandalf.sssup.it>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <1275414172.27810.27961.camel@twins>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Yehuda Sadeh
We should initialize the module dynamic debug datastructures only after determining that the module is not loaded yet. This fixes a bug introduced in 2.6.35-rc2, where when trying to load a module twice we also load its dynamic printing data twice, which causes all sorts of nasty issues. Also handle the dynamic debug cleanup later on failure.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (removed a #ifdef)
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Joe Perches
Add the ability to print a format and va_list from a structure pointer. This allows __dev_printk to be implemented as a single printk while minimizing string space duplication. %pV should not be used without some mechanism to verify the format and argument use, ala __attribute__((format(printf, ...))).
Signed-off-by: Joe Perches <joe@perches.com>
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
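A hedged usage sketch (the wrapper below is hypothetical; struct va_format and the %pV extension are what the commit adds):

    #include <stdarg.h>
    #include <linux/kernel.h>

    /* Forward a caller's format and arguments through a single printk via
     * %pV, instead of formatting into an intermediate buffer first. */
    static void my_warn(const char *prefix, const char *fmt, ...)
            __attribute__((format(printf, 2, 3)));

    static void my_warn(const char *prefix, const char *fmt, ...)
    {
            struct va_format vaf;
            va_list args;

            va_start(args, fmt);
            vaf.fmt = fmt;
            vaf.va = &args;
            printk(KERN_WARNING "%s: %pV", prefix, &vaf);
            va_end(args);
    }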
-
- 30 June 2010, 1 commit
-
-
Committed by Imre Deak
bitmap_find_next_zero_area requires the size of the bitmap; we instead passed the last suitable position. This made it impossible to allocate from the end of the pool. Fixes a regression introduced by 243797f5 ("genalloc: use bitmap_find_next_zero_area").
Signed-off-by: Imre Deak <imre.deak@nokia.com>
Cc: Zygo Blaxell <zygo.blaxell@xandros.com>
Cc: Tejun Heo <tj@kernel.org>
Acked-by: Akinobu Mita <akinobu.mita@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
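For reference, a small hedged sketch of the corrected call pattern (the wrapper and its names are invented; the point is only that the second argument is the bitmap size in bits, not the last usable index):

    #include <linux/bitmap.h>

    /* Find and claim a run of 'want' free bits anywhere in a map of
     * 'nbits' bits; returns the start index, or nbits if nothing fits. */
    static unsigned long claim_bits(unsigned long *map, unsigned long nbits,
                                    unsigned int want)
    {
            unsigned long start;

            start = bitmap_find_next_zero_area(map, nbits /* size, not end */,
                                               0, want, 0);
            if (start >= nbits)
                    return nbits;
            bitmap_set(map, start, want);
            return start;
    }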
-
- 23 June 2010, 1 commit
-
-
Committed by Paul E. McKenney
Convert to rcu_dereference_raw() given that many callers may have many different locking models.
Located-by: Miles Lane <miles.lane@gmail.com>
Tested-by: Miles Lane <miles.lane@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
- 17 June 2010, 1 commit
-
-
Committed by Uwe Kleine-König
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 15 June 2010, 1 commit
-
-
Committed by Mathieu Desnoyers
Helps finding racy users of call_rcu(), which results in hangs because list entries are overwritten and/or skipped.

Changelog since v4:
- Bisectability is now OK.
- Now generate a WARN_ON_ONCE() for a non-initialized rcu_head passed to call_rcu(). Statically initialized objects are detected with object_is_static().
- Rename rcu_head_init_on_stack to init_rcu_head_on_stack.
- Remove init_rcu_head() completely.

Changelog since v3:
- Include comments from Lai Jiangshan

This new patch version is based on the debugobjects with the newly introduced "active state" tracker. Non-initialized entries are all considered as "statically initialized". An activation fixup (triggered by call_rcu()) takes care of performing the debug object initialization without issuing any warning. Since we cannot increase the size of struct rcu_head, I don't see much room to put an identifier for statically initialized rcu_head structures. So for now, we have to live without "activation without explicit init" detection. But the main purpose of this debug option is to detect double-activations (double call_rcu() use of a rcu_head before the callback is executed), which is correctly addressed here. This also detects potential internal RCU callback corruption, which would cause the callbacks to be executed twice.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
CC: David S. Miller <davem@davemloft.net>
CC: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
CC: akpm@linux-foundation.org
CC: mingo@elte.hu
CC: laijs@cn.fujitsu.com
CC: dipankar@in.ibm.com
CC: josh@joshtriplett.org
CC: dvhltc@us.ibm.com
CC: niv@us.ibm.com
CC: tglx@linutronix.de
CC: peterz@infradead.org
CC: rostedt@goodmis.org
CC: Valdis.Kletnieks@vt.edu
CC: dhowells@redhat.com
CC: eric.dumazet@gmail.com
CC: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
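A hedged usage sketch of the on-stack annotation the patch introduces (the surrounding structure, callback, and wrapper function are invented; with CONFIG_DEBUG_OBJECTS_RCU_HEAD disabled the two helpers compile away):

    #include <linux/kernel.h>
    #include <linux/rcupdate.h>
    #include <linux/completion.h>

    struct stack_waiter {
            struct rcu_head rcu;        /* lives on the caller's stack */
            struct completion done;
    };

    static void stack_waiter_cb(struct rcu_head *head)
    {
            struct stack_waiter *w = container_of(head, struct stack_waiter, rcu);

            complete(&w->done);
    }

    static void wait_one_grace_period(void)
    {
            struct stack_waiter w;

            init_rcu_head_on_stack(&w.rcu);     /* tell debugobjects it is on-stack */
            init_completion(&w.done);
            call_rcu(&w.rcu, stack_waiter_cb);
            wait_for_completion(&w.done);
            destroy_rcu_head_on_stack(&w.rcu);  /* before the stack frame goes away */
    }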
-
- 07 June 2010, 1 commit
-
-
Committed by Konrad Rzeszutek Wilk
We put the functions dealing with the operations on the SWIOTLB buffer in the header and make those functions non-static, and also export them via EXPORT_SYMBOL_GPL. See "swiotlb: add swiotlb_tbl_map_single library function" for a full description of the patchset.
[v2: swiotlb_sync_single_range_for_* no more. Remove usage.]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Tested-by: Albert Herranz <albert_herranz@yahoo.es>
-