- 27 October 2011, 3 commits
-
By Per Forlin
The mmc_core module needs to use setup_fault_attr() in order to set fault injection attributes at module load time. Signed-off-by: Per Forlin <per.forlin@linaro.org> Reviewed-by: Akinobu Mita <akinobu.mita@gmail.com> Signed-off-by: Chris Ball <cjb@laptop.org>
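A minimal sketch of the pattern this enables, with illustrative names rather than the actual mmc_core code: a module keeps a struct fault_attr, accepts the usual "<interval>,<probability>,<space>,<times>" string as a module parameter, and hands it to setup_fault_attr() from its init function.

    #include <linux/module.h>
    #include <linux/fault-inject.h>

    static DECLARE_FAULT_ATTR(fail_request);        /* starts with default attributes */

    static char *fail_request_param;
    module_param_named(fail_request, fail_request_param, charp, 0);
    MODULE_PARM_DESC(fail_request, "<interval>,<probability>,<space>,<times>");

    static int __init example_init(void)
    {
            /* setup_fault_attr() parses the string and fills in the attributes */
            if (fail_request_param &&
                !setup_fault_attr(&fail_request, fail_request_param))
                    pr_warn("example: invalid fault injection spec\n");
            return 0;
    }
    module_init(example_init);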
-
By Per Forlin
This adds support for injecting data errors after a completed host transfer. The mmc core will return an error even though the host transfer was successful. This simple fault injection proved to be very useful for testing the non-blocking error handling in mmc_blk_issue_rw_rq(). Random faults can also test how the host driver handles pre_req() and post_req() in case of errors. Signed-off-by: Per Forlin <per.forlin@linaro.org> Acked-by: Akinobu Mita <akinobu.mita@gmail.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
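The core of such injection is a single should_fail() check in the completion path; a hedged sketch (the function name and error code below are illustrative, not the actual mmc code):

    #include <linux/fault-inject.h>
    #include <linux/errno.h>

    static DECLARE_FAULT_ATTR(fail_data_request);

    /* Called after the host reports a successful transfer of 'bytes'. */
    static int example_post_transfer(unsigned int bytes)
    {
            if (should_fail(&fail_data_request, bytes))
                    return -EIO;    /* pretend the data transfer went wrong */
            return 0;
    }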
-
By Per Forlin
Export the symbols should_fail() and fault_create_debugfs_attr() in order to let modules utilize the fault injection framework. Signed-off-by: Per Forlin <per.forlin@linaro.org> Acked-by: Akinobu Mita <akinobu.mita@gmail.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
- 20 October 2011, 1 commit
-
By Dan McGee
The files were renamed in commit cc4589eb; fix the name in the file itself. Signed-off-by: Dan McGee <dpmcgee@gmail.com> Signed-off-by: NeilBrown <neilb@suse.de>
-
- 19 October 2011, 3 commits
-
By Jason Baron
Dynamic debug recently added support for netdev_printk. It uses __netdev_printk() to support this functionality. However, when CONFIG_NET is not set, we get the following error:

    lib/built-in.o: In function `__dynamic_netdev_dbg':
    (.text+0x9fda): undefined reference to `__netdev_printk'

Fix this by making the call to netdev_printk() contingent upon CONFIG_NET. We could have fixed this by giving netdev_printk() a no-op definition in the !CONFIG_NET case, but that would not be consistent with how the networking layer uses netdev_printk: when CONFIG_NET is not set, netdev_printk() has no no-op definition. Signed-off-by: Jason Baron <jbaron@redhat.com> Acked-by: Randy Dunlap <rdunlap@xenotime.net> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Andrew Morton <akpm@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
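A hedged sketch of the pattern, with a hypothetical caller rather than the dynamic_debug code itself: the netdev-prefixed print is only compiled in when the networking core, and therefore __netdev_printk()/netdev_printk(), can exist.

    #include <linux/kernel.h>

    #if defined(CONFIG_NET)
    #include <linux/netdevice.h>

    /* Only built when the networking core provides netdev_printk(). */
    static void example_netdev_debug(const struct net_device *dev, const char *msg)
    {
            netdev_printk(KERN_DEBUG, dev, "%s\n", msg);
    }
    #endif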
-
By Jason Baron
We were using KERN_CONT to combine messages with their prefix. However, KERN_CONT is not SMP safe, in the sense that it can interleave messages; this interleaving can result in printks coming out at the wrong loglevel. With the high frequency of printks that dynamic debug can produce, this is not desirable. So make dynamic_emit_prefix() fill a char buf[64] instead of doing a printk directly. If we enable printing of function, module, line, or pid info, it is placed in this 64-byte buffer. In my testing, 64 bytes was enough to fulfill all requests. Even if it's not, we can match up the printk itself to see where it came from, so this is no big deal. [akpm@linux-foundation.org: convert dangerous macro to C] Signed-off-by: Jason Baron <jbaron@redhat.com> Cc: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Andrew Morton <akpm@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
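Roughly the shape of a prefix builder that fills a caller-supplied buffer instead of issuing KERN_CONT printks. The struct fields match struct _ddebug, but the flag names here are simplified placeholders, not the real _DPRINTK_* flags.

    #include <linux/dynamic_debug.h>
    #include <linux/kernel.h>

    #define INCL_MODNAME  0x1       /* placeholder flag names */
    #define INCL_FUNCNAME 0x2
    #define INCL_LINENO   0x4

    static char *example_emit_prefix(const struct _ddebug *desc,
                                     unsigned int flags, char buf[64])
    {
            int pos = 0;

            buf[0] = '\0';
            if (flags & INCL_MODNAME)
                    pos += scnprintf(buf + pos, 64 - pos, "%s:", desc->modname);
            if (flags & INCL_FUNCNAME)
                    pos += scnprintf(buf + pos, 64 - pos, "%s:", desc->function);
            if (flags & INCL_LINENO)
                    pos += scnprintf(buf + pos, 64 - pos, "%d:", desc->lineno);
            return buf;     /* later emitted as part of a single printk */
    }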
-
By Jason Baron
The num_enabled accounting isn't actually used anywhere - remove it. Signed-off-by: Jason Baron <jbaron@redhat.com> Cc: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Andrew Morton <akpm@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
- 06 October 2011, 1 commit
-
By Peter Zijlstra
This change is preparatory for the migrate_disable() implementation, but it stands on its own and provides a cleanup. It currently only converts those sites required for task placement. Kosaki-san once mentioned replacing cpus_allowed with a proper cpumask_t instead of the NR_CPUS-sized array it currently is; that would also require something like this. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Acked-by: Thomas Gleixner <tglx@linutronix.de> Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Link: http://lkml.kernel.org/n/tip-e42skvaddos99psip0vce41o@git.kernel.org Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 04 October 2011, 5 commits
-
By Peter Zijlstra
Initial benchmarks show that the cpu_relax() calls in the llist cmpxchg loops are a net loss:

    $ for i in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor ; do echo performance > $i; done
    $ echo 4096 32000 64 128 > /proc/sys/kernel/sem
    $ ./sembench -t 2048 -w 1900 -o 0

    Pre:
    run time 30 seconds 778936 worker burns per second
    run time 30 seconds 912190 worker burns per second
    run time 30 seconds 817506 worker burns per second
    run time 30 seconds 830870 worker burns per second
    run time 30 seconds 845056 worker burns per second

    Post:
    run time 30 seconds 905920 worker burns per second
    run time 30 seconds 849046 worker burns per second
    run time 30 seconds 886286 worker burns per second
    run time 30 seconds 822320 worker burns per second
    run time 30 seconds 900283 worker burns per second

So about 4% faster. (!)

cpu_relax() stalls the pipeline; therefore, when used in a tight loop it has the following benefits:
- it allows SMT siblings to have a go;
- it reduces pressure on the CPU interconnect.

However, cmpxchg loops are unfair and thus have unbounded completion time, so we should avoid getting into such heavily contended situations where the above benefits make any difference. A typical cmpxchg loop should not go round more than a handful of times at worst, so adding extra delays just slows things down. Since the llist primitives are new, there aren't any bad users yet, and we should avoid growing them. Heavily contended sites should generally be better off using ticket locks for serialization, since they provide bounded completion times (FIFO-fair over the CPUs). Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Huang Ying <ying.huang@intel.com> Cc: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/1315836358.26517.43.camel@twins Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Huang Ying
Extend the llist_add*() functions to return a success indicator; this allows the scheduler code to send an IPI only if the queue was empty. (There's no effect on existing users, because the llist_add*() functions are inline, so the extra return value will be optimized out by the compiler if not used by callers.) Signed-off-by: Huang Ying <ying.huang@intel.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1315461646-1379-5-git-send-email-ying.huang@intel.com Signed-off-by: Ingo Molnar <mingo@elte.hu>
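The consumer-side idiom this enables, with illustrative names rather than the actual scheduler code: kick the remote CPU only when the list just went from empty to non-empty, because any earlier producer has already sent the IPI.

    #include <linux/llist.h>
    #include <linux/percpu.h>
    #include <linux/smp.h>

    struct example_item {
            struct llist_node llnode;
            /* payload ... */
    };

    static DEFINE_PER_CPU(struct llist_head, example_list);

    static void example_queue_on(int cpu, struct example_item *item)
    {
            /* llist_add() now returns true if the list was empty before the add */
            if (llist_add(&item->llnode, &per_cpu(example_list, cpu)))
                    smp_send_reschedule(cpu);       /* first entry: wake the remote CPU */
    }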
-
By Huang Ying
In llist_add() and related functions, if the first cmpxchg() call succeeds there was no point in having called cpu_relax() before it. So cpu_relax() in a busy loop involving cmpxchg() should go after the cmpxchg() rather than before it. This patch fixes this for all involved llist functions. Signed-off-by: Huang Ying <ying.huang@intel.com> Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1315461646-1379-4-git-send-email-ying.huang@intel.com Signed-off-by: Ingo Molnar <mingo@elte.hu>
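A sketch of the resulting loop shape, written against the llist API as a rough illustration rather than the exact lib/llist.c code (note that the llist commit above then removes even this remaining cpu_relax()):

    #include <linux/llist.h>
    #include <linux/atomic.h>
    #include <asm/processor.h>      /* cpu_relax() */

    static void example_llist_add(struct llist_node *new, struct llist_head *head)
    {
            struct llist_node *entry, *old_entry;

            entry = head->first;
            for (;;) {
                    old_entry = entry;
                    new->next = entry;
                    entry = cmpxchg(&head->first, old_entry, new);
                    if (entry == old_entry)
                            break;          /* the first attempt often succeeds: no delay paid */
                    cpu_relax();            /* back off only after a failed attempt */
            }
    }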
-
By Ingo Molnar
Remove the in_nmi() checks spread around the code. in_nmi() is not available on every architecture, and it's a pretty obscure and ugly check in any case. Cc: Huang Ying <ying.huang@intel.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1315461646-1379-3-git-send-email-ying.huang@intel.com Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Huang Ying
Because the llist code will be used in a performance-critical scheduler code path, make llist_add() and llist_del_all() inline to avoid function call overhead and the related 'glue' overhead. Signed-off-by: Huang Ying <ying.huang@intel.com> Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1315461646-1379-2-git-send-email-ying.huang@intel.com Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 02 October 2011, 1 commit
-
By Arnd Bergmann
Thumb2 kernels cannot be built with frame pointers, but can use the ARM_UNWIND feature for unwinding instead. This makes sure that all features that rely on unwinding, including CONFIG_LATENCYTOP and FAULT_INJECTION_STACKTRACE_FILTER, do not enable frame pointers when the unwinder is already selected, and that we always build with the unwinder when we want a Thumb2 kernel, to make sure we do not get frame pointers instead. A different option would be to redefine the CONFIG_FRAME_POINTER option on ARM to mean building with either frame pointers or the unwinder, and then select which one to use based on the CPU architecture or another user option. That would still allow building Thumb2 kernels without the unwinder, but would also be more confusing. Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
- 22 September 2011, 1 commit
-
By Lasse Collin
xz_dec_run() could incorrectly return XZ_BUF_ERROR if all of the following were true:
- The caller knows how many bytes of output to expect and only provides that much output space.
- When the last output bytes are decoded, the caller-provided input buffer ends right before the LZMA2 end-of-payload marker. So LZMA2 won't provide more output anymore, but it won't know it yet and thus won't return XZ_STREAM_END yet.
- A BCJ filter is in use and it hasn't left any unfiltered bytes in the temp buffer. This can happen with any BCJ filter, but in practice it's more likely with filters other than the x86 BCJ.

This fixes <https://bugzilla.redhat.com/show_bug.cgi?id=735408>, where Squashfs thinks that a valid file system is corrupt. It also fixes a similar bug in single-call mode where the uncompressed size of a block using BCJ + LZMA2 was 0 bytes and the caller provided no output space. Many empty .xz files don't contain any blocks and thus don't trigger this bug. This also tweaks a closely related detail: xz_dec_bcj_run() could call xz_dec_lzma2_run() to decode into the temp buffer when it was known to be useless. This was harmless, although it wasted a minuscule number of CPU cycles. Signed-off-by: Lasse Collin <lasse.collin@tukaani.org> Cc: stable <stable@kernel.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 21 September 2011, 1 commit
-
By Mimi Zohar
hex2bin() converts a hexadecimal string to its binary representation. The original version of hex2bin() did not do any error checking. This patch adds error checking and returns the result.

Changelog v1:
- removed unpack_hex_byte()
- changed the return code from boolean to int

Changelog:
- use the new unpack_hex_byte()
- add the __must_check compiler annotation (Andy Shevchenko's suggestion)
- change the function API to return the error-checking result (based on Tetsuo Handa's initial patch)

Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com> Acked-by: Andy Shevchenko <andy.shevchenko@gmail.com>
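For illustration, an error-checking conversion built on the kernel's hex_to_bin() helper, roughly the shape the checked hex2bin() takes (a sketch, not the exact lib/hexdump.c code):

    #include <linux/kernel.h>
    #include <linux/types.h>

    static int example_hex2bin(u8 *dst, const char *src, size_t count)
    {
            while (count--) {
                    int hi = hex_to_bin(*src++);
                    int lo = hex_to_bin(*src++);

                    if (hi < 0 || lo < 0)
                            return -1;      /* not a hexadecimal digit */
                    *dst++ = (hi << 4) | lo;
            }
            return 0;
    }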
-
- 15 September 2011, 2 commits
-
By Jesper Juhl
This patch removes an unneeded include of linux/version.h from lib/dynamic_debug.c, identified by 'make versioncheck'. This is the only file in lib/ with this issue. Signed-off-by: Jesper Juhl <jj@chaosbits.net> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
By Michael Witten
Signed-off-by: Michael Witten <mfwitten@gmail.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 14 September 2011, 2 commits
-
By Yong Zhang
There are still some leftovers of commit f59ca058 [locking, lib/atomic64: Annotate atomic64_lock::lock as raw]. [ tglx: Seems I picked the wrong version of that patch :( ] Signed-off-by: Yong Zhang <yong.zhang0@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Shan Hai <haishan.bai@gmail.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Link: http://lkml.kernel.org/r/20110914074924.GA16096@zhy Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
By H Hartley Sweeten
Include <linux/cryptohash.h> to pick up the declarations for sha_transform and sha_init, and quiet the sparse noise:

    warning: symbol 'sha_transform' was not declared. Should it be static?
    warning: symbol 'sha_init' was not declared. Should it be static?

Signed-off-by: H Hartley Sweeten <hsweeten@visionengravers.com> Acked-by: Mandeep Singh Baines <msb@chromium.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 13 September 2011, 5 commits
-
By Shan Hai
The spinlock-protected atomic64 operations must be irq safe, as they are used in hard interrupt context and cannot be preempted on -rt:

    NIP [c068b218] rt_spin_lock_slowlock+0x78/0x3a8
    LR [c068b1e0] rt_spin_lock_slowlock+0x40/0x3a8
    Call Trace:
    [eb459b90] [c068b1e0] rt_spin_lock_slowlock+0x40/0x3a8 (unreliable)
    [eb459c20] [c068bdb0] rt_spin_lock+0x40/0x98
    [eb459c40] [c03d2a14] atomic64_read+0x48/0x84
    [eb459c60] [c001aaf4] perf_event_interrupt+0xec/0x28c
    [eb459d10] [c0010138] performance_monitor_exception+0x7c/0x150
    [eb459d30] [c0014170] ret_from_except_full+0x0/0x4c

So annotate it. In mainline this change documents the low-level nature of the lock - otherwise there's no functional difference. Lockdep and Sparse checking will work as usual. Signed-off-by: Shan Hai <haishan.bai@gmail.com> Reviewed-by: Yong Zhang <yong.zhang0@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Thomas Gleixner
There is no reason to allow the lock protecting rwsems (the ownerless variant) to be preemptible on -rt. Convert it to raw. In mainline this change documents the low-level nature of the lock - otherwise there's no functional difference. Lockdep and Sparse checking will work as usual. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Thomas Gleixner
The logbuf_lock can be taken in atomic context and therefore cannot be preempted on -rt - annotate it. In mainline this change documents the low-level nature of the lock - otherwise there's no functional difference. Lockdep and Sparse checking will work as usual. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> [ merged and fixed it ] Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Thomas Gleixner
The prop_local_percpu::lock can be taken in atomic context and therefore cannot be preempted on -rt - annotate it. In mainline this change documents the low-level nature of the lock - otherwise there's no functional difference. Lockdep and Sparse checking will work as usual. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Thomas Gleixner
The percpu_counter::lock can be taken in atomic context and therefore cannot be preempted on -rt - annotate it. In mainline this change documents the low-level nature of the lock - otherwise there's no functional difference. Lockdep and Sparse checking will work as usual. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 31 August 2011, 1 commit
-
By Geert Uytterhoeven
If there are no builtin users of find_next_bit_le() and find_next_zero_bit_le(), these functions are not present in the kernel image, causing m68k allmodconfig to fail with:

    ERROR: "find_next_zero_bit_le" [fs/ufs/ufs.ko] undefined!
    ERROR: "find_next_bit_le" [fs/udf/udf.ko] undefined!
    ...

This started to happen after commit 171d809d ("m68k: merge mmu and non-mmu bitops.h"), as m68k had its own inline versions before. Commit 63e424c8 ("arch: remove CONFIG_GENERIC_FIND_{NEXT_BIT, BIT_LE, LAST_BIT}") added find_last_bit.o to obj-y (so it's always included), but find_next_bit.o to lib-y (so it gets removed by the linker if there are no builtin users). Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Akinobu Mita <akinobu.mita@gmail.com> Cc: Greg Ungerer <gerg@uclinux.org> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 23 August 2011, 7 commits
-
By Justin P. Mattock
The patch below removes an extra "it" in the comment. Signed-off-by: Justin P. Mattock <justinmattock@gmail.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
By Milan Broz
kobject_uevent() uses a multicast socket and should ignore the case where one of the listeners cannot handle messages, or where nobody is listening at all. Easily reproducible when a process in the system is cloned with the CLONE_NEWNET flag. (See also http://article.gmane.org/gmane.linux.kernel.device-mapper.dm-crypt/5256) Signed-off-by: Milan Broz <mbroz@redhat.com> Acked-by: Kay Sievers <kay.sievers@vrfy.org> Cc: stable <stable@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
By Jason Baron
Previously, if dynamic debug was enabled, netdev_dbg() was using dynamic_dev_dbg() to print the underlying message. Fix this by making sure netdev_dbg() uses __netdev_printk(). Cc: David S. Miller <davem@davemloft.net> Signed-off-by: Jason Baron <jbaron@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
By Joe Perches
Add pr_fmt(fmt) with __func__. This converts the "ddebug:" prefix to "dynamic_debug:". Most likely the if (verbose) outputs could also be converted from pr_info to pr_debug. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Jason Baron <jbaron@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
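The usual form of such a prefix macro, shown as a generic sketch (the exact format string the patch uses may differ):

    /* must be defined before the first include that pulls in printk.h */
    #define pr_fmt(fmt) KBUILD_MODNAME ": %s: " fmt, __func__

    #include <linux/kernel.h>

    static void example(void)
    {
            pr_info("parsed %d entries\n", 3);
            /* prints e.g. "dynamic_debug: example: parsed 3 entries" */
    }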
-
By Joe Perches
Multiple printks with KERN_CONT can be interleaved by other printks. Reduce the likelihood of that interleaving by consolidating multiple calls to printk into one. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Jason Baron <jbaron@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
By Joe Perches
Adding dynamic_dev_dbg() duplicated the prefix output. Consolidate that output into a single routine. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Jason Baron <jbaron@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
By Joe Perches
Unlike dynamic_pr_debug, dynamic uses of dev_dbg cannot currently add task_pid/KBUILD_MODNAME/__func__/__LINE__ to selected debug output. Add a new function, similar to dynamic_pr_debug, to optionally emit these prefixes. Cc: Aloisio Almeida <aloisio.almeida@openbossa.org> Noticed-by: Aloisio Almeida <aloisio.almeida@openbossa.org> Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Jason Baron <jbaron@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
- 07 August 2011, 2 commits
-
By David S. Miller
We are going to use this for TCP/IP sequence number and fragment ID generation. Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Mandeep Singh Baines
For ChromiumOS, we use SHA-1 to verify the integrity of the root filesystem. The speed of the kernel SHA-1 implementation has a major impact on our boot performance. To improve boot performance, we investigated using the heavily optimized SHA-1 implementation used in git. With the git SHA-1 implementation, we see an 11.7% improvement in boot time (10 reboots, removing the slowest and fastest runs):

    Before:                       Mean: 6.58 seconds   Stdev: 0.14
    After (with git SHA-1, this patch): Mean: 5.89 seconds   Stdev: 0.07

The other cool thing about the git SHA-1 implementation is that it only needs 64 bytes of stack for the workspace, while the original kernel implementation needed 320 bytes. Signed-off-by: Mandeep Singh Baines <msb@chromium.org> Cc: Ramsay Jones <ramsay@ramsay1.demon.co.uk> Cc: Nicolas Pitre <nico@cam.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: David S. Miller <davem@davemloft.net> Cc: linux-crypto@vger.kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
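A hedged usage sketch against the lib/sha1.c interface: hashing a single 64-byte block, which is all sha_transform() handles on its own (a full SHA-1 additionally needs message padding and length encoding). SHA_WORKSPACE_WORDS is the constant that shrinks with this change.

    #include <linux/cryptohash.h>
    #include <linux/string.h>

    static void example_sha1_block(const char block[64], __u32 digest[SHA_DIGEST_WORDS])
    {
            __u32 workspace[SHA_WORKSPACE_WORDS];     /* now 16 words (64 bytes) of stack */

            sha_init(digest);                         /* load the standard initial values */
            sha_transform(digest, block, workspace);
            memset(workspace, 0, sizeof(workspace));  /* don't leak scheduled words */
    }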
-
- 04 August 2011, 5 commits
-
By Paul Bolle
Signed-off-by: Paul Bolle <pebolle@tiscali.nl> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
By Hugh Dickins
We have already acknowledged that swapoff of a tmpfs file is slower than it was before conversion to the generic radix_tree: a little slower there will be acceptable, if the hotter paths are faster. But it was a shock to find swapoff of a 500MB file 20 times slower on my laptop, taking 10 minutes; and at that rate it significantly slows down my testing. Now, most of that turned out to be overhead from PROVE_LOCKING and PROVE_RCU: without those it was only 4 times slower than before, and more realistic tests on other machines don't fare as badly. I've tried a number of things to improve it, including tagging the swap entries, then doing lookup by tag: I'd expected that to halve the time, but in practice it's erratic, and often counter-productive. The only change I've so far found to make a consistent improvement is to short-circuit the way we go back and forth: gang lookup packing entries into the array supplied, then shmem scanning that array for the target entry. Scanning in place doubles the speed, so it's now only twice as slow as before (or three times slower when the PROVEs are on). So, add radix_tree_locate_item() as an expedient, once-off, single-caller hack to do the lookup directly in place. #ifdef it on CONFIG_SHMEM and CONFIG_SWAP, as much to document its limited applicability as to save space in other configurations. And, sadly, #include sched.h for cond_resched(). Signed-off-by: Hugh Dickins <hughd@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Hugh Dickins
A patchset to extend tmpfs to MAX_LFS_FILESIZE by abandoning its peculiar swap vector, instead keeping a file's swap entries in the same radix tree as its struct page pointers: thus saving memory and simplifying its code and locking. This patch: The radix_tree is used by several subsystems for different purposes. A major use is to store the struct page pointers of a file's pagecache for memory management. But what if mm wanted to store something other than page pointers there too? The low bit of a radix_tree entry is already used to denote an indirect pointer, for internal use, and the unlikely radix_tree_deref_retry() case. Define the next bit as denoting an exceptional entry, and supply inline functions: radix_tree_exception() returns non-0 in either unlikely case, and radix_tree_exceptional_entry() returns non-0 in the second case only. If a subsystem already uses radix_tree with that bit set, no problem: it does not affect internal workings at all, but is defined for the convenience of those storing well-aligned pointers in the radix_tree. The radix_tree_gang_lookups have an implicit assumption that the caller can deduce the offset of each entry returned, e.g. by the page->index of a struct page. But that may not be feasible for some kinds of item to be stored there. radix_tree_gang_lookup_slot() now allows an optional indices argument, an output array in which to return those offsets. The same could be added to the other radix_tree_gang_lookups, but for now keep it to the only one for which we need it. Signed-off-by: Hugh Dickins <hughd@google.com> Acked-by: Rik van Riel <riel@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
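An illustrative sketch of how a subsystem might use the new bit to stash a small non-pointer value in the tree; the encoding below is my own, in the spirit of what shmem later does with swap entries, not code from the patch:

    #include <linux/radix-tree.h>
    #include <linux/gfp.h>

    static RADIX_TREE(example_tree, GFP_KERNEL);

    /* Bits 0 and 1 of an entry are reserved, so shift the value up by two. */
    static int example_store(unsigned long index, unsigned long value)
    {
            void *entry = (void *)((value << 2) | RADIX_TREE_EXCEPTIONAL_ENTRY);

            return radix_tree_insert(&example_tree, index, entry);
    }

    static bool example_load(unsigned long index, unsigned long *value)
    {
            void *entry = radix_tree_lookup(&example_tree, index);

            if (!entry || !radix_tree_exceptional_entry(entry))
                    return false;   /* absent, or an ordinary pointer entry */
            *value = (unsigned long)entry >> 2;
            return true;
    }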
-
By Rusty Russell
The current hyper-optimized functions are overkill if you simply want to allocate an id for a device. Create versions which use an internal lock. In followup patches, numerous drivers are converted to use this interface. Thanks to Tejun for feedback. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Acked-by: Tejun Heo <tj@kernel.org> Acked-by: Jonathan Cameron <jic23@cam.ac.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
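A brief usage sketch of the simplified interface this adds, ida_simple_get()/ida_simple_remove(), with an illustrative IDA name:

    #include <linux/idr.h>
    #include <linux/gfp.h>

    static DEFINE_IDA(example_ida);

    static int example_alloc_id(void)
    {
            /* end == 0 means "no upper limit"; returns the id or a negative errno */
            return ida_simple_get(&example_ida, 0, 0, GFP_KERNEL);
    }

    static void example_free_id(int id)
    {
            ida_simple_remove(&example_ida, id);
    }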
-
By Akinobu Mita
init_fault_attr_dentries() is used to export a fault_attr via debugfs, but it can only export it in the debugfs root directory. Per Forlin is working on mmc_fail_request, which adds support for injecting data errors after a completed host transfer in the MMC subsystem. The fault_attr for mmc_fail_request should be defined per mmc host and exported in a debugfs directory per mmc host, like /sys/kernel/debug/mmc0/mmc_fail_request. init_fault_attr_dentries() doesn't help for mmc_fail_request. So this introduces fault_create_debugfs_attr(), which is able to create the attribute directory under an arbitrary parent directory, and replaces init_fault_attr_dentries(). [akpm@linux-foundation.org: extraneous semicolon, per Randy] Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Tested-by: Per Forlin <per.forlin@linaro.org> Cc: Jens Axboe <axboe@kernel.dk> Cc: Christoph Lameter <cl@linux-foundation.org> Cc: Pekka Enberg <penberg@kernel.org> Cc: Matt Mackall <mpm@selenic.com> Cc: Randy Dunlap <rdunlap@xenotime.net> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
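A hedged sketch of calling the new helper from a driver, assuming 'parent' is a per-device debugfs directory the driver already created (the names here are illustrative):

    #include <linux/fault-inject.h>
    #include <linux/debugfs.h>
    #include <linux/err.h>

    static DECLARE_FAULT_ATTR(fail_example_request);

    static int example_setup_fault_injection(struct dentry *parent)
    {
            struct dentry *dir;

            /* creates "fail_example_request/" with the usual fault-injection knobs */
            dir = fault_create_debugfs_attr("fail_example_request", parent,
                                            &fail_example_request);
            if (IS_ERR(dir))
                    return PTR_ERR(dir);
            return 0;
    }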
-