- 27 September 2013, 1 commit
-
-
By Eric W. Biederman
In kobj_ns_current_may_mount the default should be to allow the mount. The test is only for a single kobj_ns_type at a time, and unless there is a reason to prevent it, mounting sysfs should be allowed. Subsystems that are not registered are not involved, and so can't have a reason to prevent mounting sysfs.

This is a bug-fix to:

    commit 7dc5dbc8
    Author: Eric W. Biederman <ebiederm@xmission.com>
    Date:   Mon Mar 25 20:07:01 2013 -0700

        sysfs: Restrict mounting sysfs

        Don't allow mounting sysfs unless the caller has CAP_SYS_ADMIN rights
        over the net namespace. The principle here is if you create or have
        capabilities over it you can mount it, otherwise you get to live with
        what other people have mounted.

        Instead of testing this with a straightforward ns_capable call,
        perform this check the long and torturous way with kobject helpers;
        this keeps direct knowledge of namespaces out of sysfs, and preserves
        the existing sysfs abstractions.

        Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
        Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>

which came in via the userns tree during the 3.12 merge window.

Reported-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
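For reference, a sketch of the fixed helper in lib/kobject.c, reconstructed from the description above (default to true, and only a registered subsystem may veto the mount):

    #include <linux/kobject_ns.h>
    #include <linux/spinlock.h>

    bool kobj_ns_current_may_mount(enum kobj_ns_type type)
    {
            bool may_mount = true;  /* default: allow the mount */

            spin_lock(&kobj_ns_type_lock);
            /* only a registered subsystem can have a reason to refuse */
            if ((type > KOBJ_NS_TYPE_NONE) && (type < KOBJ_NS_TYPES) &&
                kobj_ns_ops_tbl[type])
                    may_mount = kobj_ns_ops_tbl[type]->current_may_mount();
            spin_unlock(&kobj_ns_type_lock);

            return may_mount;
    }

(kobj_ns_type_lock and kobj_ns_ops_tbl are the file-local registration table in lib/kobject.c.)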
-
- 21 September 2013, 1 commit
-
-
By Will Deacon
The cmpxchg() function tends not to support 64-bit arguments on 32-bit architectures. This could be either due to use of unsigned long arguments (like on ARM) or lack of instruction support (cmpxchgq on x86). However, these architectures may implement a specific cmpxchg64() function to provide 64-bit cmpxchg support instead.

Since the lockref code requires a 64-bit cmpxchg and relies on the architecture selecting ARCH_USE_CMPXCHG_LOCKREF, move to using cmpxchg64 instead of cmpxchg and allow 32-bit architectures to make use of the lockless lockref implementation.

Cc: Waiman Long <Waiman.Long@hp.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
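A sketch of the loop this enables on 32-bit, modeled on the CMPXCHG_LOOP macro in lib/lockref.c (simplified; the helper name here is hypothetical):

    #include <linux/lockref.h>

    static bool lockref_inc_lockless(struct lockref *lockref)
    {
            struct lockref old;

            old.lock_count = ACCESS_ONCE(lockref->lock_count);
            while (arch_spin_value_unlocked(old.lock.rlock.raw_lock)) {
                    struct lockref new = old, prev = old;

                    new.count++;
                    /* cmpxchg64() also works on 32-bit architectures
                     * that cannot express a 64-bit plain cmpxchg() */
                    old.lock_count = cmpxchg64(&lockref->lock_count,
                                               old.lock_count,
                                               new.lock_count);
                    if (old.lock_count == prev.lock_count)
                            return true;    /* count bumped locklessly */
            }
            return false;   /* lock held: caller takes the slow path */
    }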
-
- 13 September 2013, 1 commit
-
-
By Martin Schwidefsky
After the last architecture switched to generic hard irqs, the config options HAVE_GENERIC_HARDIRQS & GENERIC_HARDIRQS and the related code for !CONFIG_GENERIC_HARDIRQS can be removed.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 12 September 2013, 11 commits
-
-
By Herbert Xu
Unfortunately, even with a softdep some distros fail to include the necessary modules in the initrd. Therefore this patch adds a fallback path to restore existing behaviour where we cannot load the new crypto crct10dif algorithm.

In order to do this, the underlying crct10dif has been split out from the crypto implementation so that it can be used on the fallback path.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
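A sketch of the resulting shape of lib/crc-t10dif.c (simplified; crc_t10dif_generic() is assumed here to be the split-out table-based routine):

    #include <linux/crc-t10dif.h>
    #include <crypto/hash.h>

    static struct crypto_shash *crct10dif_tfm;      /* NULL if never loaded */

    __u16 crc_t10dif(const unsigned char *buffer, size_t len)
    {
            struct {
                    struct shash_desc shash;
                    char ctx[2];
            } desc;

            /* fallback path: table-based implementation, no crypto API */
            if (!crct10dif_tfm)
                    return crc_t10dif_generic(0, buffer, len);

            desc.shash.tfm = crct10dif_tfm;
            desc.shash.flags = 0;
            *(__u16 *)desc.ctx = 0;
            crypto_shash_update(&desc.shash, buffer, len);
            return *(__u16 *)desc.ctx;
    }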
-
By Sergey Senozhatsky
LZ4 compression and decompression functions require input/output parameters of different signedness: unsigned char for compression and signed char for decompression. Change the decompression API to require "(const) unsigned char *".

Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Kyungsik Lee <kyungsik.lee@lge.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Yann Collet <yann.collet.73@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
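The decompression prototypes in include/linux/lz4.h then end up along these lines (a sketch; consult the header for the exact form):

    /* both sides now take unsigned char, matching lz4_compress() */
    int lz4_decompress(const unsigned char *src, size_t *src_len,
                       unsigned char *dest, size_t actual_dest_len);

    int lz4_decompress_unknownoutputsize(const unsigned char *src,
                                         size_t src_len,
                                         unsigned char *dest,
                                         size_t *dest_len);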
-
By Jan Kara
With users of radix_tree_preload() run from interrupt (block/blk-ioc.c is one such possible user), the following race can happen:

    radix_tree_preload()
    ...
    radix_tree_insert()
      radix_tree_node_alloc()
        if (rtp->nr) {
          ret = rtp->nodes[rtp->nr - 1];
    <interrupt>
    ...
    radix_tree_preload()
    ...
    radix_tree_insert()
      radix_tree_node_alloc()
        if (rtp->nr) {
          ret = rtp->nodes[rtp->nr - 1];

And we give out one radix tree node twice. That clearly results in radix tree corruption with different results (usually OOPS) depending on which two users of radix tree race.

We fix the problem by making radix_tree_node_alloc() always allocate fresh radix tree nodes when in interrupt. Using preloading when in interrupt doesn't make sense since all the allocations have to be atomic anyway and we cannot steal nodes from process-context users because some users rely on radix_tree_insert() succeeding after radix_tree_preload(). The in_interrupt() check is somewhat ugly, but we cannot simply key off the passed gfp_mask as that is acquired from root_gfp_mask() and thus the same for all preload users.

Another part of the fix is to avoid node preallocation in radix_tree_preload() when the passed gfp_mask doesn't allow waiting. Again, preallocation in such a case doesn't make sense, and when preallocation would happen in interrupt we could possibly leak some allocated nodes. However, some users of radix_tree_preload() require the following radix_tree_insert() to succeed. To avoid unexpected effects for these users, radix_tree_preload() only warns if the passed gfp mask doesn't allow waiting, and we provide a new function radix_tree_maybe_preload() for those users which get different gfp masks from different call sites and which are prepared to handle radix_tree_insert() failure.

Signed-off-by: Jan Kara <jack@suse.cz>
Cc: Jens Axboe <jaxboe@fusionio.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
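A sketch of the intended calling pattern for the new helper (my_tree, my_lock and my_insert are hypothetical):

    #include <linux/radix-tree.h>
    #include <linux/spinlock.h>

    static RADIX_TREE(my_tree, GFP_ATOMIC);
    static DEFINE_SPINLOCK(my_lock);

    /* gfp may or may not allow waiting, and the caller can cope with
     * radix_tree_insert() failing, so radix_tree_maybe_preload() fits. */
    static int my_insert(unsigned long index, void *item, gfp_t gfp)
    {
            int err = radix_tree_maybe_preload(gfp);

            if (err)
                    return err;
            spin_lock(&my_lock);
            err = radix_tree_insert(&my_tree, index, item); /* may be -ENOMEM */
            spin_unlock(&my_lock);
            radix_tree_preload_end();       /* re-enables preemption */
            return err;
    }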
-
By Cody P Schafer
No reason to require the rbtree test code to be a module; allow it to be builtin (streamlines my development process).

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
Reviewed-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Cc: David Woodhouse <David.Woodhouse@intel.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Michel Lespinasse <walken@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Cody P Schafer
Just check that we examine all nodes in the tree for the postorder iteration.

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
Reviewed-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Cc: David Woodhouse <David.Woodhouse@intel.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Michel Lespinasse <walken@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Cody P Schafer
Postorder iteration yields all of a node's children prior to yielding the node itself, and this particular implementation also avoids examining the leaf links in a node after that node has been yielded.

In what I expect will be its most common usage, postorder iteration allows the deletion of every node in an rbtree without modifying the rbtree nodes (no _requirement_ that they be nulled) while avoiding referencing child nodes after they have been "deleted" (most commonly, freed).

I have only updated zswap to use this functionality at this point, but numerous bits of code (most notably in the filesystem drivers) use a hand rolled postorder iteration that NULLs child links as it traverses the tree. Each of those instances could be replaced with this common implementation.

Patches 1 & 2 add rbtree postorder iteration functions.
Patch 3 adds testing of the iteration to the rbtree runtime tests.
Patch 4 allows building the rbtree runtime tests as builtins.
Patch 5 updates zswap.

This patch:

Add postorder iteration functions for rbtree. These are useful for safely freeing an entire rbtree without modifying the tree at all (a usage sketch follows).

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
Reviewed-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Cc: David Woodhouse <David.Woodhouse@intel.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Michel Lespinasse <walken@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
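A sketch of the free-the-whole-tree use case built on the two functions this patch adds (struct my_node and my_tree_destroy are hypothetical):

    #include <linux/rbtree.h>
    #include <linux/slab.h>

    struct my_node {
            struct rb_node rb;
            int key;
    };

    /* Postorder visits both children before their parent, so each node
     * can be freed as soon as the iterator has moved past it, with no
     * rebalancing and no need to NULL out child links. */
    static void my_tree_destroy(struct rb_root *root)
    {
            struct rb_node *node = rb_first_postorder(root);

            while (node) {
                    struct rb_node *next = rb_next_postorder(node);

                    kfree(rb_entry(node, struct my_node, rb));
                    node = next;
            }
            *root = RB_ROOT;
    }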
-
By Alexandre Courbot
When decompressing into memory, the output buffer length is set to some arbitrarily high value (0x7fffffff) to indicate the output is, virtually, unlimited in size.

The problem with this is that some platforms have their physical memory at high physical addresses (0x80000000 or more), and that the output buffer address and its "unlimited" length cannot be added without overflowing. An example of this can be found in inflate_fast():

    /* next_out is the output buffer address */
    out = strm->next_out - OFF;
    /* avail_out is the output buffer size. end will overflow if the output
     * address is >= 0x80000104 */
    end = out + (strm->avail_out - 257);

This has huge consequences on the performance of kernel decompression, since the following exit condition of inflate_fast() will be always true:

    } while (in < last && out < end);

Indeed, "end" has overflowed and is now always lower than "out". As a result, inflate_fast() will return after processing one single byte of input data, and will thus need to be called an unreasonably high number of times. This probably went unnoticed because kernel decompression is fast enough even with this issue.

Nonetheless, adjusting the output buffer length in such a way that the above pointer arithmetic never overflows results in a kernel decompression that is about 3 times faster on affected machines.

Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Tested-by: Jon Medhurst <tixy@linaro.org>
Cc: Stephen Warren <swarren@wwwdotorg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
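A sketch of the adjustment's idea (the helper name is hypothetical; the real change lives in the lib/decompress_*.c wrappers):

    #include <linux/kernel.h>

    /* pick the largest output length that cannot wrap when added to
     * the buffer address, instead of a blanket 0x7fffffff */
    static long safe_out_len(unsigned char *out_buf, long out_len)
    {
            if (out_len == 0)
                    out_len = ((size_t)~0) - (size_t)out_buf;
            return out_len;
    }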
-
By Gu Zheng
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Emilio López
The documentation mentions a "name" parameter, which does not exist. This commit removes such mention from the function documentation.

Signed-off-by: Emilio López <emilio@elopez.com.ar>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Joe Perches
Use the helper function instead of __GFP_ZERO.

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Joonyoung Shim
In struct gen_pool_chunk, end_addr means the end address of the memory chunk (inclusive), but in the implementation it is treated as start address + size of the memory chunk (exclusive), so it points to the address one past the correct ending address.

The ending address of a memory chunk plus one will overflow on a memory chunk that includes the last address of the memory map; e.g. when the starting address is 0xFFF00000 and the size is 0x100000 on a 32-bit machine, the ending address will be 0x100000000.

Use the correct ending address, i.e. starting address + size - 1.

[akpm@linux-foundation.org: add comment to struct gen_pool_chunk:end_addr]
Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
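A sketch of the inclusive convention (chunk_bounds_example is hypothetical; start_addr and end_addr are the real struct gen_pool_chunk fields):

    #include <linux/genalloc.h>

    /* a chunk covering [0xFFF00000, 0xFFFFFFFF] on a 32-bit machine now
     * stores end_addr = 0xFFFFFFFF instead of overflowing to 0x100000000 */
    static void chunk_bounds_example(struct gen_pool_chunk *chunk,
                                     unsigned long start, size_t size)
    {
            chunk->start_addr = start;
            chunk->end_addr = start + size - 1;     /* inclusive last byte */
    }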
-
- 10 September 2013, 1 commit
-
-
By Kent Overstreet
Percpu frontend for allocating ids. With percpu allocation (that works), it's impossible to guarantee it will always be possible to allocate all nr_tags - typically, some will be stuck on a remote percpu freelist where the current job can't get to them.

We do guarantee that it will always be possible to allocate at least (nr_tags / 2) tags - this is done by keeping track of which and how many cpus have tags on their percpu freelists. On allocation failure, if enough cpus have tags that there could potentially be (nr_tags / 2) tags stuck on remote percpu freelists, we then pick a remote cpu at random to steal from.

Note that there's no cpu hotplug notifier - we don't care, because steal_tags() will eventually get the down cpu's tags. We _could_ satisfy more allocations if we had a notifier - but we'll still meet our guarantees and it's absolutely not a correctness issue, so I don't think it's worth the extra code.

From akpm: "It looks OK to me (that's as close as I get to an ack :))"

v6 changes:
- Add #include <linux/cpumask.h> to include/linux/percpu_ida.h to make alpha/arc builds happy (Fengguang)
- Move second (cpu >= nr_cpu_ids) check inside of first check scope in steal_tags() (akpm + nab)

v5 changes:
- Change percpu_ida->cpus_have_tags to cpumask_t (kmo + akpm)
- Add comment for percpu_ida_cpu->lock + ->nr_free (kmo + akpm)
- Convert steal_tags() to use cpumask_weight() + cpumask_next() + cpumask_first() + cpumask_clear_cpu() (kmo + akpm)
- Add comment for alloc_global_tags() (kmo + akpm)
- Convert percpu_ida_alloc() to use cpumask_set_cpu() (kmo + akpm)
- Convert percpu_ida_free() to use cpumask_set_cpu() (kmo + akpm)
- Drop percpu_ida->cpus_have_tags allocation in percpu_ida_init() (kmo + akpm)
- Drop percpu_ida->cpus_have_tags kfree in percpu_ida_destroy() (kmo + akpm)
- Add comment for percpu_ida_alloc @ gfp (kmo + akpm)
- Move to percpu_ida.c + percpu_ida.h (kmo + akpm + nab)

v4 changes:
- Fix tags.c reference in percpu_ida_init (akpm)

Signed-off-by: Kent Overstreet <kmo@daterainc.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: "Nicholas A. Bellinger" <nab@linux-iscsi.org>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
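A sketch of the API as described above (MY_NR_TAGS and the wrapper are hypothetical):

    #include <linux/percpu_ida.h>

    #define MY_NR_TAGS 128

    static struct percpu_ida my_tags;

    static int my_tags_example(void)
    {
            int tag, err;

            err = percpu_ida_init(&my_tags, MY_NR_TAGS);
            if (err)
                    return err;

            tag = percpu_ida_alloc(&my_tags, GFP_KERNEL);   /* may sleep */
            if (tag >= 0)
                    percpu_ida_free(&my_tags, tag);

            percpu_ida_destroy(&my_tags);
            return 0;
    }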
-
- 08 September 2013, 2 commits
-
-
By Linus Torvalds
The only actual current lockref user (dcache) uses zero reference counts even for perfectly live dentries, because it's a cache: there may not be any users, but that doesn't mean that we want to throw away the dentry.

At the same time, the dentry cache does have a notion of a truly "dead" dentry that we must not even increment the reference count of, because we have pruned it and it is not valid.

Currently that distinction is not visible in the lockref itself, and the dentry cache validation uses "lockref_get_or_lock()" to either get a new reference to a dentry that already had existing references (and thus cannot be dead), or get the dentry lock so that we can then verify the dentry and increment the reference count under the lock if that verification was successful.

That's all somewhat complicated.

This adds the concept of being "dead" to the lockref itself, by simply using a count that is negative. This allows a usage scenario where we can increment the refcount of a dentry without having to validate it, and pushing the special "we killed it" case into the lockref code.

The dentry code itself doesn't actually use this yet, and it's probably too late in the merge window to do that code (the dentry_kill() code with its "should I decrement the count" logic really is pretty complex code), but let's introduce the concept at the lockref level now.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
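A sketch of how the new facility is meant to be used (kill_object and try_get_object are hypothetical callers):

    #include <linux/lockref.h>

    /* the holder of the lock declares the object dead: the count goes
     * negative and lockless getters will refuse to resurrect it */
    static void kill_object(struct lockref *ref)
    {
            spin_lock(&ref->lock);
            lockref_mark_dead(ref);
            spin_unlock(&ref->lock);
    }

    /* take a reference without validating the object first */
    static bool try_get_object(struct lockref *ref)
    {
            return lockref_get_not_dead(ref);
    }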
-
By Linus Torvalds
The code got rewritten, but the comments got copied as-is from older versions, and as a result the argument name in the comment didn't actually match the code any more.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 07 September 2013, 1 commit
-
-
By Herbert Xu
This patch reinstates commits

    67822649
    39761214
    0b95a7f8
    31d93962
    2d31e518

Now that module softdeps are in the kernel, we can use that to resolve the boot issue which caused the revert.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 05 September 2013, 1 commit
-
-
By Vineet Gupta
Frame pointer on ARC doesn't serve the conventional purpose of stack unwinding due to the typical way the ABI designates its usage. Thus its explicit usage on ARC is discouraged (gcc is free to use it, for some tricky stack frames, even with -fomit-frame-pointer). Hence no point enabling it for ARC.

References: http://www.spinics.net/lists/kernel/msg1593937.html

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: "Paul E. McKenney" <paul.mckenney@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Michel Lespinasse <walken@google.com>
Cc: linux-kernel@vger.kernel.org
-
- 04 September 2013, 2 commits
-
-
By Al Viro
New formats: %p[dD][234]?. The next pointer is interpreted as struct dentry * or struct file * resp. ('d' => dentry, 'D' => file) and the last component(s) of the pathname are printed (%pd => just the last one, %pd2 => the last two, etc.).

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
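A short example of the new specifiers (show_names is hypothetical):

    #include <linux/fs.h>
    #include <linux/printk.h>

    static void show_names(struct dentry *dentry, struct file *file)
    {
            pr_info("dentry: %pd\n", dentry);   /* last component, e.g. "vmlinuz" */
            pr_info("file:   %pD2\n", file);    /* last two, e.g. "boot/vmlinuz" */
    }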
-
By Luck, Tony
While we are likely to succeed and break out of this loop, it isn't guaranteed. We should be power- and thread-friendly if we do have to go around for a second (or third, or more) attempt.

Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
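The loop in question is lockref's cmpxchg retry loop; a generic sketch of the pattern (bump is a hypothetical counter update):

    #include <asm/processor.h>      /* cpu_relax() */

    static void bump(unsigned int *p)
    {
            unsigned int old, new;

            for (;;) {
                    old = ACCESS_ONCE(*p);
                    new = old + 1;
                    if (cmpxchg(p, old, new) == old)
                            break;          /* likely: won on the first try */
                    cpu_relax();            /* power- and SMT-friendly spin */
            }
    }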
-
- 03 September 2013, 2 commits
-
-
By Linus Torvalds
Instead of taking the spinlock, the lockless versions atomically check that the lock is not taken, and do the reference count update using a cmpxchg() loop. This is semantically identical to doing the reference count update protected by the lock, but avoids the "wait for lock" contention that you get when accesses to the reference count are contended.

Note that a "lockref" is absolutely _not_ equivalent to an atomic_t. Even when the lockref reference counts are updated atomically with cmpxchg, the fact that they also verify the state of the spinlock means that the lockless updates can never happen while somebody else holds the spinlock.

So while "lockref_put_or_lock()" looks a lot like just another name for "atomic_dec_and_lock()", and both optimize to lockless updates, they are fundamentally different: the decrement done by atomic_dec_and_lock() is truly independent of any lock (as long as it doesn't decrement to zero), so a locked region can still see the count change.

The lockref structure, in contrast, really is a *locked* reference count. If you hold the spinlock, the reference count will be stable and you can modify the reference count without using atomics, because even the lockless updates will see and respect the state of the lock.

In order to enable the cmpxchg lockless code, the architecture needs to do three things:

 (1) Make sure that the "arch_spinlock_t" and an "unsigned int" can fit in an aligned u64, and have a "cmpxchg()" implementation that works on such a u64 data type.

 (2) Define a helper function to test for a spinlock being unlocked ("arch_spin_value_unlocked()").

 (3) Select the "ARCH_USE_CMPXCHG_LOCKREF" config variable in its Kconfig file.

This enables it for x86-64 (but not 32-bit, we'd need to make sure cmpxchg() turns into the proper cmpxchg8b in order to enable it for 32-bit mode).

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
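As an illustration of requirement (2), the x86 ticket-lock flavour of the helper looks roughly like this (a sketch; requirements (1) and (3) are a 64-bit-capable cmpxchg() and a Kconfig select, respectively):

    /* true iff this spinlock *value* (a snapshot, not the live lock)
     * is in the unlocked state */
    static inline int arch_spin_value_unlocked(arch_spinlock_t lock)
    {
            return lock.tickets.head == lock.tickets.tail;
    }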
-
By Linus Torvalds
They aren't very good to inline, since they already call external functions (the spinlock code), and we're going to create rather more complicated versions of them that can do the reference count updates locklessly.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 29 August 2013, 2 commits
-
-
By Eric W. Biederman
Don't allow mounting sysfs unless the caller has CAP_SYS_ADMIN rights over the net namespace. The principle here is if you create or have capabilities over it you can mount it, otherwise you get to live with what other people have mounted.

Instead of testing this with a straightforward ns_capable call, perform this check the long and torturous way with kobject helpers; this keeps direct knowledge of namespaces out of sysfs, and preserves the existing sysfs abstractions.

Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
-
By jbaron@akamai.com
Settings of the form 'line x module y +p' can fail arbitrarily due to an uninitialized local variable. With this patch, results are consistent, as expected.

Signed-off-by: Jason Baron <jbaron@akamai.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 27 August 2013, 2 commits
-
-
By Max Filippov
-e is a non-standard echo option; echo output is implementation-dependent when it is used. Replace echo -e with printf, as suggested by the POSIX echo manual.

Cc: NeilBrown <neilb@suse.de>
Cc: Jim Kukunas <james.t.kukunas@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.de>
-
By Ken Steele
This change adds TILE-Gx SIMD instructions to the software raid (md), modeling the Altivec implementation. This is only for Syndrome generation; there is more that could be done to improve recovery, as in the recent Intel SSE3 recovery implementation.

The code unrolls 8 times; this turns out to be the best on tilegx hardware among the set 1, 2, 4, 8 or 16. The code reads one cache-line of data from each disk, stores P and Q, then goes to the next cache-line.

The test code in sys/linux/lib/raid6/test reports a 2008 MB/s data read rate for syndrome generation using 18 disks (16 data and 2 parity). It was 1512 MB/s before these SIMD optimizations. This is running on 1 core with all the data in cache.

This is based on the paper The Mathematics of RAID-6 (http://kernel.org/pub/linux/kernel/people/hpa/raid6.pdf).

Signed-off-by: Ken Steele <ken@tilera.com>
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Signed-off-by: NeilBrown <neilb@suse.de>
-
- 24 August 2013, 1 commit
-
-
By Richard Laager
The LZ4 code is listed as using the "BSD 2-Clause License".

Signed-off-by: Richard Laager <rlaager@wiktel.com>
Acked-by: Kyungsik Lee <kyungsik.lee@lge.com>
Cc: Chanho Min <chanho.min@lge.com>
Cc: Richard Yao <ryao@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
[ The 2-clause BSD can be just converted into GPL, but that's rude and
  pointless, so don't do it  - Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 23 August 2013, 1 commit
-
-
By Mike Snitzer
Commit f7926850 ("math64: New div64_u64_rem helper") implemented div64_u64 in terms of div64_u64_rem. But div64_u64_rem was removed because it slowed down div64_u64 (and there were no other users of div64_u64_rem).

Device Mapper's I/O statistics support has a need for div64_u64_rem; reintroduce this helper as a separate method that doesn't slow down div64_u64, especially on 32-bit systems.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: Stanislaw Gruszka <sgruszka@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
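A sketch of using the reintroduced helper (region_of is hypothetical, in the spirit of the dm-statistics use):

    #include <linux/math64.h>

    /* split a 64-bit offset into region index + offset-in-region,
     * without open-coding 64/64 division on 32-bit */
    static u64 region_of(u64 offset, u64 region_size, u64 *offset_in_region)
    {
            return div64_u64_rem(offset, region_size, offset_in_region);
    }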
-
- 19 August 2013, 1 commit
-
-
By Paul E. McKenney
In order to better respond to things like duplicate invocations of call_rcu(), RCU needs to see the status of a call to debug_object_activate(). This would allow RCU to leak the callback in order to avoid adding freelist-reuse mischief to the duplicate invocations. This commit therefore makes debug_object_activate() return status: zero for success and -EINVAL for failure.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Sedat Dilek <sedat.dilek@gmail.com>
Cc: Davidlohr Bueso <davidlohr.bueso@hp.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
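A sketch of a caller using the new return value (activate_or_leak is hypothetical; the leak-on-failure policy is the one described above):

    #include <linux/debugobjects.h>
    #include <linux/bug.h>

    static bool activate_or_leak(void *obj, struct debug_obj_descr *descr)
    {
            if (debug_object_activate(obj, descr)) {
                    /* duplicate activation: leak rather than corrupt */
                    WARN_ONCE(1, "duplicate activation of %p\n", obj);
                    return false;
            }
            return true;
    }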
-
- 15 August 2013, 1 commit
-
-
By Tang Chen
The comments of find_cpio_data() say:

    * @offset: When a matching file is found, this is the offset to the
    *          beginning of the cpio.

But according to the code:

    dptr = PTR_ALIGN(p + ch[C_NAMESIZE], 4);
    nptr = PTR_ALIGN(dptr + ch[C_FILESIZE], 4);
    ....
    *offset = (long)nptr - (long)data;  /* data is the cpio file */

@offset is the offset of the next file, not the matching file itself. This is confusing and may cause unnecessary time wasted on debugging. So fix it.

As Tejun Heo suggested, rename @offset to @nextoff, which is clearer to users, and adjust the comments accordingly.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Reviewed-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Reviewed-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 09 August 2013, 1 commit
-
-
By EunBong Song
This patch replaces dma_length in lib/swiotlb.c with the sg_dma_len() macro, because a build error can occur if CONFIG_NEED_SG_DMA_LENGTH is not set and CONFIG_SWIOTLB is set.

Signed-off-by: EunBong Song <eunb.song@samsung.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
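A sketch of the substitution (set_mapping is hypothetical): sg_dma_len() expands to sg->dma_length when CONFIG_NEED_SG_DMA_LENGTH is set and to sg->length otherwise, so it builds in both configurations:

    #include <linux/scatterlist.h>

    static void set_mapping(struct scatterlist *sg, dma_addr_t addr)
    {
            sg->dma_address = addr;
            sg_dma_len(sg) = sg->length;    /* not sg->dma_length = ... */
    }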
-
- 07 August 2013, 1 commit
-
-
By Andi Kleen
dump_stack is used from assembler code, so make it visible.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1375740170-7446-15-git-send-email-andi@firstfloor.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- 26 July 2013, 1 commit
-
-
By Russell King
Implement debugging for kobject release functions.

kobjects are reference counted, so the drop of the last reference to them is not predictable. However, the common case is for the last reference to be the kobject's removal from a subsystem, which results in the release function being immediately called.

This can hide subtle bugs, which can occur when another thread holds a reference to the kobject at the same time that a kobject is removed. This results in the release method being delayed.

In order to make these kinds of problems more visible, the following patch implements a delayed release; this has the effect that the release function will be out of order with respect to the removal of the kobject in the same manner that it would be if a reference was being held. This provides us with an easy way to allow driver writers to debug their drivers and fix otherwise hidden problems.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 24 July 2013, 1 commit
-
-
By Herbert Xu
This reverts commits

    67822649
    39761214
    0b95a7f8
    31d93962
    2d31e518

Unfortunately this change broke boot on some systems that used an initrd which does not include the newly created crct10dif modules. As these modules are required by sd_mod under certain configurations, this is a serious problem.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 20 July 2013, 1 commit
-
-
By Richard Henderson
Remove the compile warning for __udiv_qrnnd not having a prototype. Use the __builtin_alpha_umulh introduced in gcc 4.0.

Reviewed-and-Tested-by: Matt Turner <mattst88@gmail.com>
Signed-off-by: Matt Turner <mattst88@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
-
- 15 July 2013, 1 commit
-
-
By Paul Gortmaker
The __cpuinit type of throwaway sections might have made sense some time ago when RAM was more constrained, but now the savings do not offset the cost and complications. For example, the fix in commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time") is a good example of the nasty type of bugs that can be created with improper use of the various __init prefixes.

After a discussion on LKML[1] it was decided that cpuinit should go the way of devinit and be phased out. Once all the users are gone, we can then finally remove the macros themselves from linux/init.h.

This removes all the uses of the __cpuinit macros from C files in the core kernel directories (kernel, init, lib, mm, and include) that don't really have a specific maintainer.

[1] https://lkml.org/lkml/2013/5/20/589

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
- 13 July 2013, 1 commit
-
-
By Oleg Nesterov
1. This is mostly theoretical, but llist_add*() need ACCESS_ONCE(). Otherwise it is not guaranteed that the first cmpxchg() uses the same value for old_entry and new_last->next.

2. These helpers cache the result of cmpxchg() and read the initial value of head->first before the main loop. I do not think this makes sense. In the likely case cmpxchg() succeeds, otherwise it doesn't hurt to reload head->first. I think it would be better to simplify the code and simply read ->first before cmpxchg().

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Andrey Vagin <avagin@openvz.org>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
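With both points applied, the batch-add helper ends up essentially like this (sketch):

    #include <linux/llist.h>

    bool llist_add_batch(struct llist_node *new_first,
                         struct llist_node *new_last,
                         struct llist_head *head)
    {
            struct llist_node *first;

            do {
                    /* one ACCESS_ONCE load feeds both ->next and the
                     * cmpxchg() expected value */
                    new_last->next = first = ACCESS_ONCE(head->first);
            } while (cmpxchg(&head->first, first, new_first) != first);

            return !first;  /* true if the list was previously empty */
    }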
-
- 12 July 2013, 1 commit
-
-
By Maarten Lankhorst
Move the definitions for wound/wait mutexes out to a separate header, ww_mutex.h. This reduces clutter in mutex.h, and increases readability.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Cc: Dave Airlie <airlied@gmail.com>
Link: http://lkml.kernel.org/r/51D675DC.3000907@canonical.com
[ Tidied up the code a bit. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 10 July 2013, 1 commit
-
-
By Dan Carpenter
I was reviewing code which I suspected might allocate a zero size SG table. That will cause memory corruption. Also, we can't return before doing the memset or we could end up using uninitialized memory in the cleanup path.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Imre Deak <imre.deak@intel.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Maxim Levitsky <maximlevitsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
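A sketch of the guard this implies for __sg_alloc_table() (the wrapper name is hypothetical): memset first, so no error path can leave the table uninitialized, then reject zero-entry tables:

    #include <linux/scatterlist.h>
    #include <linux/string.h>

    static int sg_alloc_table_prologue(struct sg_table *table,
                                       unsigned int nents)
    {
            memset(table, 0, sizeof(*table));
            if (nents == 0)
                    return -EINVAL; /* zero-size table would corrupt memory */
            return 0;
    }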
-