- 03 Aug, 2015 1 commit
-
-
Submitted by Jason Baron
Signed-off-by: Jason Baron <jbaron@akamai.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: benh@kernel.crashing.org Cc: bp@alien8.de Cc: davem@davemloft.net Cc: ddaney@caviumnetworks.com Cc: heiko.carstens@de.ibm.com Cc: linux-kernel@vger.kernel.org Cc: liuj97@gmail.com Cc: luto@amacapital.net Cc: michael@ellerman.id.au Cc: rabin@rab.in Cc: ralf@linux-mips.org Cc: rostedt@goodmis.org Cc: shuahkh@osg.samsung.com Cc: vbabka@suse.cz Cc: will.deacon@arm.com Link: http://lkml.kernel.org/r/0c091ecebd78a879ed8a71835d205a691a75ab4e.1438227999.git.jbaron@akamai.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 20 Jul, 2015 2 commits
-
-
Submitted by Davidlohr Bueso
No one uses this anymore, and this is not the first time the idea of replacing it with a (now possible) userspace-side implementation has come up. Lock stealing logic was removed long ago, when the lock was granted to the highest prio. Signed-off-by: Davidlohr Bueso <dbueso@suse.de> Cc: Darren Hart <dvhart@infradead.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Mike Galbraith <umgwanakikbuti@gmail.com> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1435782588-4177-2-git-send-email-dave@stgolabs.net Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Submitted by Davidlohr Bueso
Although futexes are well known for being a royal pita, we really have very little debugging capabilities - except for relying on tglx's eye half the time. By simply making use of the existing fault-injection machinery, we can improve this situation, allowing us to generate artificial uaddr faults and deadlock scenarios. Of course, when this is disabled in production systems, the overhead for failure checks is practically zero -- so this is very cheap at the same time. As future work, it would be nice to enhance trinity to make use of this. There is a special tunable 'ignore-private', which can filter out private futexes. Given the tsk->make_it_fail filter and this option, pi futexes can be narrowed down pretty closely. Signed-off-by: Davidlohr Bueso <dbueso@suse.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Darren Hart <darren@dvhart.com> Cc: Davidlohr Bueso <dave@stgolabs.net> Link: http://lkml.kernel.org/r/1435645562-975-3-git-send-email-dave@stgolabs.net Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
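A minimal sketch of how such hooks sit on top of the kernel's generic fault-injection machinery (should_fail() from linux/fault-inject.h); the helper name and the ignore_private flag mirror the 'ignore-private' tunable described above and are illustrative, not the literal patch:

    static DECLARE_FAULT_ATTR(fail_futex);
    static bool ignore_private;     /* the 'ignore-private' tunable */

    static bool should_fail_futex(bool fshared)
    {
        /* optionally exempt private futexes from injection */
        if (ignore_private && !fshared)
            return false;
        return should_fail(&fail_futex, 1);
    }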
-
- 18 Jul, 2015 4 commits
-
-
Submitted by Aneesh Kumar K.V
Without this we end up using the previous name of the compressor in the loop in unpack_rootfs. For example we get errors like "compression method gzip not configured" even when we have CONFIG_DECOMPRESS_GZIP enabled. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Haggai Eran
If dma-debug is disabled due to a memory error, DMA unmaps do not affect the dma_active_cacheline radix tree anymore, and debug_dma_assert_idle() can print false warnings. Disable debug_dma_assert_idle() when dma_debug_disabled() is true. Signed-off-by: Haggai Eran <haggaie@mellanox.com> Fixes: 0abdd7a8 ("dma-debug: introduce debug_dma_assert_idle()") Cc: Dan Williams <dan.j.williams@intel.com> Cc: Joerg Roedel <joro@8bytes.org> Cc: Vinod Koul <vinod.koul@intel.com> Cc: Russell King <rmk+kernel@arm.linux.org.uk> Cc: James Bottomley <JBottomley@Parallels.com> Cc: Florian Fainelli <f.fainelli@gmail.com> Cc: Sebastian Ott <sebott@linux.vnet.ibm.com> Cc: Jiri Kosina <jkosina@suse.cz> Cc: Horia Geanta <horia.geanta@freescale.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
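The fix plausibly reduces to an early-out guard at the top of the assert; a minimal sketch (the radix-tree lookup that follows is elided):

    void debug_dma_assert_idle(struct page *page)
    {
        if (dma_debug_disabled())
            return;
        /* ... existing dma_active_cacheline lookup and warning ... */
    }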
-
Submitted by Horacio Mijail Antón Quiles
A hexdump with a buf not aligned to the groupsize causes non-naturally-aligned memory accesses. This was causing a kernel panic on the Blackfin BF527 processor, when such an unaligned buffer was fed by the function ubifs_scanned_corruption in fs/ubifs/scan.c. To fix this, change accesses to the contents of the buffer so they go through get_unaligned(). This change should be harmless to unaligned-access-capable architectures, and any performance hit should anyway be dwarfed by the snprintf() processing time. Signed-off-by: Horacio Mijail Antón Quiles <hmijail@gmail.com> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: David Howells <dhowells@redhat.com> Cc: Vivek Goyal <vgoyal@redhat.com> Cc: Joe Perches <joe@perches.com> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
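For the 8-byte group case the change boils down to fetching through get_unaligned() instead of dereferencing directly; a sketch of the pattern inside hex_dump_to_buffer(), with the surrounding loop elided:

    #include <asm/unaligned.h>

    const u64 *ptr8 = buf;
    /* before: u64 x = *(ptr8 + j); -- traps on strict-alignment CPUs */
    u64 x = get_unaligned(ptr8 + j);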
-
Submitted by Nicolas Iooss
Using __printf attributes helps to detect several format string issues at compile time (even though -Wformat-security is currently disabled in the Makefile). For example it can detect when formatting a pointer as a number, like the issue fixed in commit a3fa71c4 ("wl18xx: show rx_frames_per_rates as an array as it really is"), or when the arguments do not match the format string, cf. for example commit 5ce1aca8 ("reiserfs: fix __RASSERT format string"). To prevent similar bugs in the future, add a __printf attribute to every function prototype which needs one in include/linux/ and lib/. These functions were mostly found by using gcc's -Wsuggest-attribute=format flag. Signed-off-by: Nicolas Iooss <nicolas.iooss_linux@m4x.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Felipe Balbi <balbi@ti.com> Cc: Joel Becker <jlbec@evilplan.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
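For reference, a sketch of the annotation on a hypothetical prototype; __printf(f, a) marks argument f as the format string and argument a as the first variadic argument, so gcc can type-check call sites:

    __printf(2, 3)
    void my_dev_log(int level, const char *fmt, ...);   /* hypothetical */

    my_dev_log(1, "%d\n", some_ptr);  /* now warns: pointer formatted as a number */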
-
- 09 Jul, 2015 1 commit
-
-
Submitted by Phil Sutter
If rhashtable_walk_next detects a resize operation in progress, it jumps to the new table and continues walking that one. But it fails to drop the reference to its current item, leading it to continue traversing the new table's bucket into which the current item is sorted, and after reaching that bucket's end it continues traversing the new table's second bucket instead of the first one, thereby potentially missing items. This fixes the rhashtable runtime test for me. Bug probably introduced by Herbert Xu's patch eddee5ba ("rhashtable: Fix walker behaviour during rehash") although not explicitly tested. Fixes: eddee5ba ("rhashtable: Fix walker behaviour during rehash") Signed-off-by: Phil Sutter <phil@nwl.cc> Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 06 Jul, 2015 1 commit
-
-
Submitted by Andrey Ryabinin
KASAN_SHADOW_OFFSET is a purely arch-specific setting, so it should live in the arch's Kconfig file. Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com> Cc: Alexander Popov <alpopov@ptsecurity.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Konovalov <adech.fo@gmail.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul Bolle <pebolle@tiscali.nl> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1435828178-10975-7-git-send-email-a.ryabinin@samsung.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 04 Jul, 2015 1 commit
-
-
Submitted by Naveen N. Rao
Both CONFIG_SCHEDSTATS=y and CONFIG_TASK_DELAY_ACCT=y track task sched_info, which results in ugly #if clauses. Simplify the code by introducing a synthetic CONFIG_SCHED_INFO switch, selected by both. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Cc: Balbir Singh <bsingharora@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: a.p.zijlstra@chello.nl Cc: ricklind@us.ibm.com Link: http://lkml.kernel.org/r/8d19eef800811a94b0f91bcbeb27430a884d7433.1435255405.git.naveen.n.rao@linux.vnet.ibm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
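A sketch of the simplification this enables in code (both Kconfig options select SCHED_INFO, so one symbol replaces the compound test):

    /* before: */
    #if defined(CONFIG_SCHEDSTATS) || defined(CONFIG_TASK_DELAY_ACCT)
        struct sched_info sched_info;
    #endif

    /* after: */
    #ifdef CONFIG_SCHED_INFO
        struct sched_info sched_info;
    #endif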
-
- 01 Jul, 2015 5 commits
-
-
Submitted by Vladimir Zapolskiy
To be consistent with other kernel interface namings, rename of_get_named_gen_pool() to of_gen_pool_get(). In the original function name, the "_named" suffix refers to a device tree property which contains a phandle to a device; the corresponding device driver is assumed to register a gen_pool object. Due to this weak relation, and to avoid any confusion (e.g. in a future scenario where gen_pool objects are named), the suffix is removed. [sfr@canb.auug.org.au: crypto/marvell/cesa - fix up for of_get_named_gen_pool() rename] Signed-off-by: Vladimir Zapolskiy <vladimir_zapolskiy@mentor.com> Cc: Nicolas Ferre <nicolas.ferre@atmel.com> Cc: Philipp Zabel <p.zabel@pengutronix.de> Cc: Shawn Guo <shawn.guo@linaro.org> Cc: Sascha Hauer <kernel@pengutronix.de> Cc: Alexandre Belloni <alexandre.belloni@free-electrons.com> Cc: Russell King <linux@arm.linux.org.uk> Cc: Mauro Carvalho Chehab <mchehab@osg.samsung.com> Cc: Vinod Koul <vinod.koul@intel.com> Cc: Takashi Iwai <tiwai@suse.de> Cc: Jaroslav Kysela <perex@perex.cz> Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Boris BREZILLON <boris.brezillon@free-electrons.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Vladimir Zapolskiy
To be consistent with other genalloc interface namings, rename dev_get_gen_pool() to gen_pool_get(). The "dev_" prefix is dropped, since it merely restates the type of the function's argument and so does not carry any useful information. [akpm@linux-foundation.org: update arch/arm/mach-socfpga/pm.c] Signed-off-by: Vladimir Zapolskiy <vladimir_zapolskiy@mentor.com> Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com> Cc: Philipp Zabel <p.zabel@pengutronix.de> Cc: Shawn Guo <shawn.guo@linaro.org> Cc: Sascha Hauer <kernel@pengutronix.de> Cc: Alexandre Belloni <alexandre.belloni@free-electrons.com> Cc: Russell King <linux@arm.linux.org.uk> Cc: Mauro Carvalho Chehab <mchehab@osg.samsung.com> Cc: Vinod Koul <vinod.koul@intel.com> Cc: Takashi Iwai <tiwai@suse.de> Cc: Jaroslav Kysela <perex@perex.cz> Cc: Mark Brown <broonie@kernel.org> Cc: Nicolas Ferre <nicolas.ferre@atmel.com> Cc: Alan Tull <atull@opensource.altera.com> Cc: Dinh Nguyen <dinguyen@opensource.altera.com> Cc: Kevin Hilman <khilman@linaro.org> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
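Taken together with the previous commit, call sites change roughly as follows (a sketch assuming the signatures at the time: gen_pool_get() takes only the device, of_gen_pool_get() the node, property name and index):

    /* before */
    pool = dev_get_gen_pool(dev);
    pool = of_get_named_gen_pool(np, "sram", 0);

    /* after */
    pool = gen_pool_get(dev);
    pool = of_gen_pool_get(np, "sram", 0);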
-
Submitted by Dave Gordon
do_device_access() takes a separate parameter to indicate the direction of data transfer, which it used to use to select the appropriate function out of sg_pcopy_{to,from}_buffer(). However, these two functions now have different prototypes (following the const-qualification of the 'from' variant's buffer argument), so they can no longer be selected through a single function pointer without a warning. So this patch makes it bypass these wrappers and call the underlying function sg_copy_buffer() directly; this has the same calling style as do_device_access(), i.e. a separate direction-of-transfer parameter and no pointers-to-const, so skipping the wrappers not only eliminates the warning, it also makes the code simpler :) [akpm@linux-foundation.org: fix very broken build] Signed-off-by: Dave Gordon <david.s.gordon@intel.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Cc: James Bottomley <James.Bottomley@HansenPartnership.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Dave Gordon
The 'buf' parameter of sg(p)copy_from_buffer() can and should be const-qualified, although because of the shared implementation of _to_buffer() and _from_buffer(), we have to cast this away internally. This means that callers who have a 'const' buffer containing the data to be copied to the sg-list no longer have to cast away the const-ness themselves. It also enables improved coverage by code analysis tools. Signed-off-by: Dave Gordon <david.s.gordon@intel.com> Cc: Akinobu Mita <akinobu.mita@gmail.com> Cc: "Martin K. Petersen" <martin.petersen@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Dave Gordon
The kerneldoc for the functions doesn't match the code; the last two parameters (buflen, skip) have been transposed, which is confusing, especially as they're both integral types and the compiler won't warn about swapping them. These functions and the kerneldoc were introduced in commit df642cea ("lib/scatterlist: introduce sg_pcopy_from_buffer()"), Author: Akinobu Mita <akinobu.mita@gmail.com>, Date: Mon Jul 8 16:01:54 2013 -0700: "The only difference between sg_pcopy_{from,to}_buffer() and sg_copy_{from,to}_buffer() is an additional argument that specifies the number of bytes to skip the SG list before copying." The functions have the extra argument at the end, but the kerneldoc lists it in penultimate position. Signed-off-by: Dave Gordon <david.s.gordon@intel.com> Reviewed-by: Akinobu Mita <akinobu.mita@gmail.com> Cc: "Martin K. Petersen" <martin.petersen@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
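For the record, the ordering the code actually uses, which the kerneldoc now matches; skip is the final argument:

    size_t sg_pcopy_from_buffer(struct scatterlist *sgl, unsigned int nents,
                                const void *buf, size_t buflen, off_t skip);
    size_t sg_pcopy_to_buffer(struct scatterlist *sgl, unsigned int nents,
                              void *buf, size_t buflen, off_t skip);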
-
- 26 Jun, 2015 8 commits
-
-
Submitted by Ross Zwisler
Based on an original patch by Ross Zwisler [1]. Writes to persistent memory have the potential to be posted to the cpu cache, cpu write buffers, and platform write buffers (memory controller) before being committed to persistent media. Provide APIs, memcpy_to_pmem(), wmb_pmem(), and memremap_pmem(), to write data to pmem and assert that it is durable in PMEM (a persistent linear address range). A '__pmem' attribute is added so sparse can track proper usage of pointers to pmem. This continues the status quo of pmem being x86-only for 4.2, but the reworks to ioremap and a wider implementation of memremap() will enable other archs in 4.3. [1]: https://lists.01.org/pipermail/linux-nvdimm/2015-May/000932.html Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com> [djbw: various reworks] Signed-off-by: Dan Williams <dan.j.williams@intel.com>
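A hedged usage sketch of the new API (argument names are illustrative; the exact memremap_pmem() signature was reworked again for 4.3):

    void __pmem *vaddr = memremap_pmem(offset, size);

    memcpy_to_pmem(vaddr, src, len); /* may still sit in CPU/platform write buffers */
    wmb_pmem();                      /* drain the buffers; data is now durable */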
-
Submitted by Rasmus Villemoes
There are probably not many slashes in the name, but starting over when we see one feels wrong. Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Rasmus Villemoes
Strings are sometimes sanitized by replacing a certain character (often '/') by another (often '!'). In a few places, this is done the same way Schlemiel the Painter would do it. Others are slightly smarter but still do multiple strchr() calls. Introduce strreplace() to do this using a single function call and a single pass over the string. One would expect the return value to be one of three things: void, s, or the number of replacements made. I chose the fourth, returning a pointer to the end of the string. This is more likely to be useful (for example allowing the caller to avoid a strlen call). Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: "Theodore Ts'o" <tytso@mit.edu> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Neil Brown <neilb@suse.de> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Joe Perches <joe@perches.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
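A short usage sketch showing why returning the end pointer is handy:

    char name[] = "devs/foo/bar";
    char *end = strreplace(name, '/', '!');
    /* name is now "devs!foo!bar"; end points at the terminating NUL,
     * so end - name == strlen(name) without a second pass */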
-
Submitted by Kirill A. Shutemov
Currently we use a per-cpu array to hold pointers to preallocated nodes. Let's replace it with a linked list. On x86_64 it saves 256 bytes in the per-cpu ELF section, which may translate into freeing up 2MB of memory for NR_CPUS==8192. [akpm@linux-foundation.org: fix comment, coding style] Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Sudeep Holla
bitmap_print_to_pagebuf uses scnprintf to copy the cpumask/list to the page buffer. It handles the newline and trailing null character explicitly. This is unnecessary and also partially duplicated, as scnprintf already adds the trailing null character. The newline can be passed through the format string to scnprintf. This patch does that simplification. Theoretically there is one behavioral difference, however: when the buffer is too small, the original code would still output '\n' at the end, while the new code (with this patch) just continues to print the formatted string. Since this function deals only with page buffers, it's highly unlikely to hit that corner case. This patch will help in auditing the users of bitmap_print_to_pagebuf to verify that the buffer passed is large enough, and in getting rid of it completely by replacing such calls with direct scnprintf(). [akpm@linux-foundation.org: tweak comment] Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Suggested-by: Pawel Moll <Pawel.Moll@arm.com> Cc: Tejun Heo <tj@kernel.org> Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
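A condensed sketch of the simplified implementation, using the '%*pb' / '%*pbl' bitmap format specifiers (the real function additionally clamps the length to the page boundary):

    int bitmap_print_to_pagebuf(bool list, char *buf,
                                const unsigned long *maskp, int nmaskbits)
    {
        return list ? scnprintf(buf, PAGE_SIZE, "%*pbl\n", nmaskbits, maskp)
                    : scnprintf(buf, PAGE_SIZE, "%*pb\n", nmaskbits, maskp);
    }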
-
Submitted by Daniel Wagner
In case the caller does not provide a swap function, we use either a 32 bit or a generic swap function. When swapping around pointers on 64 bit architectures, falling back to the generic swap function seems like an unnecessary waste. There are at least 9 users of sort() ('sort' is difficult to grep for) and all of them use the sort function without a customized swap function. Furthermore, they are all swapping pointers around:

arch/x86/kernel/e820.c:sanitize_e820_map()
arch/x86/mm/extable.c:sort_extable()
drivers/acpi/fan.c:acpi_fan_get_fps()
fs/btrfs/super.c:btrfs_descending_sort_devices()
fs/xfs/libxfs/xfs_dir2_block.c:xfs_dir2_sf_to_block()
kernel/range.c:clean_sort_range()
mm/memcontrol.c:__mem_cgroup_usage_register_event()
sound/pci/hda/hda_auto_parser.c:snd_hda_parse_pin_defcfg()
sound/pci/hda/hda_auto_parser.c:sort_pins_by_sequence()

Obviously, we could improve the swap for other sizes as well, but this is overkill at this point. A simple test sorting a 400 element array (trying to stay within one page) with either u32_swap() or u64_swap() shows that the theory actually works. This test was done on an x86_64 (Intel Xeon E5-4610) machine.

- swap_32:
NumSamples = 100; Min = 48.00; Max = 49.00
Mean = 48.320000; Variance = 0.217600; SD = 0.466476; Median 48.000000
each * represents a count of 1
48.0000 - 48.1000 [ 68]: ********************************************************************
48.1000 - 48.2000 [ 0]:
48.2000 - 48.3000 [ 0]:
48.3000 - 48.4000 [ 0]:
48.4000 - 48.5000 [ 0]:
48.5000 - 48.6000 [ 0]:
48.6000 - 48.7000 [ 0]:
48.7000 - 48.8000 [ 0]:
48.8000 - 48.9000 [ 0]:
48.9000 - 49.0000 [ 32]: ********************************

- swap_64:
NumSamples = 100; Min = 44.00; Max = 63.00
Mean = 48.250000; Variance = 18.687500; SD = 4.322904; Median 47.000000
each * represents a count of 1
44.0000 - 45.9000 [ 15]: ***************
45.9000 - 47.8000 [ 37]: *************************************
47.8000 - 49.7000 [ 39]: ***************************************
49.7000 - 51.6000 [ 0]:
51.6000 - 53.5000 [ 0]:
53.5000 - 55.4000 [ 0]:
55.4000 - 57.3000 [ 0]:
57.3000 - 59.2000 [ 1]: *
59.2000 - 61.1000 [ 3]: ***
61.1000 - 63.0000 [ 5]: *****

- swap_72:
NumSamples = 100; Min = 53.00; Max = 71.00
Mean = 55.070000; Variance = 21.565100; SD = 4.643824; Median 53.000000
each * represents a count of 1
53.0000 - 54.8000 [ 73]: *************************************************************************
54.8000 - 56.6000 [ 9]: *********
56.6000 - 58.4000 [ 9]: *********
58.4000 - 60.2000 [ 0]:
60.2000 - 62.0000 [ 0]:
62.0000 - 63.8000 [ 0]:
63.8000 - 65.6000 [ 0]:
65.6000 - 67.4000 [ 1]: *
67.4000 - 69.2000 [ 4]: ****
69.2000 - 71.0000 [ 4]: ****

- test program:

static int cmp_32(const void *a, const void *b)
{
    u32 l = *(u32 *)a;
    u32 r = *(u32 *)b;

    if (l < r)
        return -1;
    if (l > r)
        return 1;
    return 0;
}

static int cmp_64(const void *a, const void *b)
{
    u64 l = *(u64 *)a;
    u64 r = *(u64 *)b;

    if (l < r)
        return -1;
    if (l > r)
        return 1;
    return 0;
}

static int cmp_72(const void *a, const void *b)
{
    u32 l = get_unaligned((u32 *)a);
    u32 r = get_unaligned((u32 *)b);

    if (l < r)
        return -1;
    if (l > r)
        return 1;
    return 0;
}

static void init_array32(void *array)
{
    u32 *a = array;
    int i;

    a[0] = 3821;
    for (i = 1; i < ARRAY_ELEMENTS; i++)
        a[i] = next_pseudo_random32(a[i-1]);
}

static void init_array64(void *array)
{
    u64 *a = array;
    int i;

    a[0] = 3821;
    for (i = 1; i < ARRAY_ELEMENTS; i++)
        a[i] = next_pseudo_random32(a[i-1]);
}

static void init_array72(void *array)
{
    char *p;
    u32 v;
    int i;

    v = 3821;
    for (i = 0; i < ARRAY_ELEMENTS; i++) {
        p = (char *)array + (i * 9);
        put_unaligned(v, (u32 *)p);
        v = next_pseudo_random32(v);
    }
}

static void sort_test(void (*init)(void *array),
                      int (*cmp)(const void *, const void *),
                      void *array, size_t size)
{
    ktime_t start, stop;
    int i;

    for (i = 0; i < 10000; i++) {
        init(array);

        local_irq_disable();
        start = ktime_get();

        sort(array, ARRAY_ELEMENTS, size, cmp, NULL);

        stop = ktime_get();
        local_irq_enable();

        if (i > 10000 - 101)
            pr_info("%lld\n", ktime_to_us(ktime_sub(stop, start)));
    }
}

static void *create_array(size_t size)
{
    void *array;

    array = kmalloc(ARRAY_ELEMENTS * size, GFP_KERNEL);
    if (!array)
        return NULL;

    return array;
}

static int perform_test(size_t size)
{
    void *array;

    array = create_array(size);
    if (!array)
        return -ENOMEM;

    pr_info("test element size %d bytes\n", (int)size);
    switch (size) {
    case 4:
        sort_test(init_array32, cmp_32, array, size);
        break;
    case 8:
        sort_test(init_array64, cmp_64, array, size);
        break;
    case 9:
        sort_test(init_array72, cmp_72, array, size);
        break;
    }
    kfree(array);

    return 0;
}

static int __init sort_tests_init(void)
{
    int err;

    err = perform_test(sizeof(u32));
    if (err)
        return err;

    err = perform_test(sizeof(u64));
    if (err)
        return err;

    err = perform_test(sizeof(u64)+1);
    if (err)
        return err;

    return 0;
}

static void __exit sort_tests_exit(void)
{
}

module_init(sort_tests_init);
module_exit(sort_tests_exit);

MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Daniel Wagner");
MODULE_DESCRIPTION("sort perfomance tests");

Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Geert Uytterhoeven
The test data arrays, containing pointers to test strings, are never modified, so they can be const, too. Hence mark them "const" and "__initconst". This moves 28 pointers from ".init.data" to ".init.rodata". Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Chris Metcalf
bitmap_parselist("", &mask, nmaskbits) will erroneously set bit zero in the mask. The same bug is visible in cpumask_parselist() since it is layered on top of the bitmask code, e.g. if you boot with "isolcpus=", you will actually end up with cpu zero isolated. The bug was introduced in commit 4b060420 ("bitmap, irq: add smp_affinity_list interface to /proc/irq") when bitmap_parselist() was generalized to support userspace as well as kernelspace. Fixes: 4b060420 ("bitmap, irq: add smp_affinity_list interface to /proc/irq") Signed-off-by: Chris Metcalf <cmetcalf@ezchip.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
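The regression is easiest to state as a test; a sketch of the post-fix behaviour, assuming an empty list now parses to an empty mask:

    DECLARE_BITMAP(mask, 8);

    bitmap_parselist("", mask, 8);
    /* fixed: mask comes back empty; before the fix bit 0 was set,
     * so booting with "isolcpus=" wrongly isolated CPU 0 */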
-
- 19 Jun, 2015 2 commits
-
-
Submitted by Anand Jain
drivers/cpufreq/cpufreq.c is already using this function, and now btrfs needs it as well. Export the symbol kobject_move(). Signed-off-by: Anand Jain <anand.jain@oracle.com> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: David Sterba <dsterba@suse.cz>
-
Submitted by Andrew Morton
Revert commit 534b483a ("cpumask: don't perform while loop in cpumask_next_and()"). This was a minor optimization, but it puts a `struct cpumask' on the stack, which consumes too much stack space. Reported-by: Peter Zijlstra <peterz@infradead.org> Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Cc: Tejun Heo <tj@kernel.org> Cc: "David S. Miller" <davem@davemloft.net> Cc: Amir Vadai <amirv@mellanox.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 17 Jun, 2015 1 commit
-
-
Submitted by Paul Gortmaker
This was using module_init, but there is no way this code can be modular. In the non-modular case, a module_init becomes a device_initcall, but this really isn't a device. So we should choose a more appropriate initcall bucket to put it in. Assuming boot time self tests need to be observed over a console to be useful, and that the console device could possibly not be fully functional until after device_initcall, we move this to the late_initcall bucket, which is immediately after device_initcall. Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
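The mechanical change is a one-liner; a sketch with a hypothetical test name:

    /* before: effectively device_initcall when built in */
    module_init(my_selftest_init);

    /* after: runs once consoles are certain to be up */
    late_initcall(my_selftest_init);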
-
- 16 Jun, 2015 1 commit
-
-
Submitted by Tadeusz Struk
Added an mpi_read_buf() helper function to export an MPI to a buf provided by the user, and an mpi_get_size() helper that tells the user how big the buf is. Changed mpi_free to use kzfree instead of kfree because it is used to free crypto keys. Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 13 Jun, 2015 1 commit
-
-
Submitted by Jaedon Shin
This patch fixes a MIPS compilation error: lib/mpi/generic_mpih-mul1.c: In function 'mpihelp_mul_1': lib/mpi/longlong.h:651:2: error: impossible constraint in 'asm' Signed-off-by: Jaedon Shin <jaedon.shin@gmail.com> Cc: Linux-MIPS <linux-mips@linux-mips.org> Patchwork: https://patchwork.linux-mips.org/patch/10546/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 11 Jun, 2015 3 commits
-
-
Submitted by Rasmus Villemoes
With CONFIG_DEBUG_INFO_REDUCED, we do get quite a lot of debug info (around 22.7 MB for a defconfig+DEBUG_INFO_REDUCED). However, the "basenames must match" rule used by the -femit-struct-debug-baseonly option means that we miss some core data structures, such as struct {device, file, inode, mm_struct, page} etc. We can easily get these included as well, while still getting the benefits of CONFIG_DEBUG_INFO_REDUCED (faster build times and smaller individual object files): all it takes is a dummy translation unit including a few strategic headers and compiled with a flag overriding -femit-struct-debug-baseonly. This increases the size of .debug_info by ~0.3%, but these 90 KB contain some rather useful info. Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk> Signed-off-by: Michal Marek <mmarek@suse.cz>
-
Submitted by Anton Blanchard
The -mabi=altivec option is not recognised by LLVM, so call cc-option to check for support. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Submitted by Joerg Roedel
Print a warning when all allocation attempts have failed and the function is about to return NULL. This prepares for calling the function with __GFP_NOWARN to suppress allocation failure warnings before all fall-backs have failed - which we'll do to improve kdump behavior. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Acked-by: Baoquan He <bhe@redhat.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Dave Young <dyoung@redhat.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jörg Rödel <joro@8bytes.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vivek Goyal <vgoyal@redhat.com> Cc: kexec@lists.infradead.org Link: http://lkml.kernel.org/r/1433500202-25531-2-git-send-email-joro@8bytes.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 07 Jun, 2015 1 commit
-
-
Submitted by Hauke Mehrtens
rhashtable uses EXPORT_SYMBOL_GPL() without including linux/export.h directly; it is only pulled in indirectly through some other includes. Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 06 Jun, 2015 1 commit
-
-
Submitted by Alexandre Courbot
The map_single() function is not defined as static, even though it doesn't seem to be used anywhere else in the kernel. Make it static to avoid namespace pollution, since this is a rather generic symbol. Signed-off-by: Alexandre Courbot <acourbot@nvidia.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
- 03 Jun, 2015 3 commits
-
-
Submitted by Jan Kara
strnlen_user() can return a number in the range 0 to count + sizeof(unsigned long) - 1. Clarify the comment at the top of the function so that users don't think the function returns at most count+1. Signed-off-by: Jan Kara <jack@suse.cz> [ Also added commentary about preferably not using this function ] Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Tom Lendacky
When performing a dma_map_sg() call, the number of sg entries to map is required. Using sg_nents to retrieve the number of sg entries will return the total number of entries in the sg list up to the entry marked as the end. If there happen to be unused entries in the list, these will still be counted. Some dma_map_sg() implementations will not handle the unused entries correctly (lib/swiotlb.c) and execute a BUG_ON. The sg_nents_for_len() function will traverse the sg list and return the number of entries required to satisfy the supplied length argument. This can then be supplied to the dma_map_sg() call to successfully map the sg. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
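A usage sketch of the new helper ahead of dma_map_sg() (device and direction are illustrative):

    int nents = sg_nents_for_len(sg, len);

    if (nents < 0)
        return nents;           /* list is shorter than len */

    nents = dma_map_sg(dev, sg, nents, DMA_TO_DEVICE);
    if (!nents)
        return -EIO;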
-
Submitted by Jan Kara
If the specified maximum length of the string is a multiple of unsigned long, we would load one long behind the specified maximum. If that happens to be in the next page, we can hit a page fault although we were not expected to. Fix the off-by-one bug in the test of whether we are at the end of the specified range. Signed-off-by: Jan Kara <jack@suse.cz> Cc: stable@vger.kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 31 May, 2015 1 commit
-
-
Submitted by Akinobu Mita
This introduces crc_t10dif_update(), which makes it possible to calculate the CRC for a block which straddles multiple SG elements by calling it multiple times. This also converts crc_t10dif() to use crc_t10dif_update(), as they are almost the same. (remove extra function call in crc_t10dif() and crc_t10dif_update - Tim + Herbert) Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Acked-by: Martin K. Petersen <martin.petersen@oracle.com> Cc: Tim Chen <tim.c.chen@linux.intel.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "David S. Miller" <davem@davemloft.net> Cc: linux-crypto@vger.kernel.org Cc: Nicholas Bellinger <nab@linux-iscsi.org> Cc: Sagi Grimberg <sagig@mellanox.com> Cc: "Martin K. Petersen" <martin.petersen@oracle.com> Cc: Christoph Hellwig <hch@lst.de> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> Cc: target-devel@vger.kernel.org Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
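A sketch of the intended use: seed with 0 (matching crc_t10dif()) and fold in one SG element at a time; this assumes the entries are kernel-mapped so sg_virt() is valid:

    struct scatterlist *sg;
    __u16 crc = 0;
    int i;

    for_each_sg(sgl, sg, nents, i)
        crc = crc_t10dif_update(crc, sg_virt(sg), sg->length);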
-
- 29 May, 2015 1 commit
-
-
Submitted by Dave Chinner
XFS uses non-standard batch sizes to avoid frequent global counter updates on its allocated inode counters, as they increment or decrement in batches of 64 inodes. Hence the standard percpu counter batch of 32 means that the counter is effectively a global counter. Currently XFS uses a batch size of 128 so that it doesn't take the global lock on every single modification. However, XFS also needs to compare accurately against zero, which means we need to use percpu_counter_compare(), and that has a hard-coded batch size of 32, and hence will spuriously fail to detect when it is supposed to use precise comparisons and hence the accounting goes wrong. Add __percpu_counter_compare() to take a custom batch size so we can use it sanely in XFS, and factor percpu_counter_compare() to use it. Signed-off-by: Dave Chinner <dchinner@redhat.com> Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: Dave Chinner <david@fromorbit.com>
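A sketch of the new call, passing the same batch used for updates so the precise/imprecise switchover matches reality (the counter and batch names are illustrative):

    #define MY_INODE_BATCH  128

    /* falls back to a precise, locked sum once the count is within
     * batch * num_online_cpus() of the comparison value */
    if (__percpu_counter_compare(&ifree_counter, 0, MY_INODE_BATCH) < 0)
        return -ENOSPC;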
-
- 28 May, 2015 2 commits
-
-
Submitted by Peter Zijlstra
Change the insert and erase code such that lockless searches are non-fatal. In and of itself an rbtree cannot be correctly searched while in-modification; we can however provide weaker guarantees that will allow the rbtree to be used in conjunction with other techniques, such as latches; see 9b0fd802 ("seqcount: Add raw_write_seqcount_latch()"). For this to work we need the following guarantees from the rbtree code: 1) a lockless reader must not see partial stores, as this would allow it to observe nodes that are invalid memory. 2) there must not be (temporary) loops in the tree structure in the modifier's program order, as this would cause a lookup which interrupts the modifier to get stuck indefinitely. For 1) we must use WRITE_ONCE() for all updates to the tree structure; in particular this patch only does rb_{left,right}, as those are the only elements required for simple searches. It generates slightly worse code, probably because of volatile. But in pointer-chasing-heavy code a few extra instructions should not matter. For 2) I have carefully audited the code, drawn every intermediate link state, and not found a loop. Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: David Woodhouse <David.Woodhouse@intel.com> Cc: Rik van Riel <riel@redhat.com> Reviewed-by: Michel Lespinasse <walken@google.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
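A sketch of the pattern guarantee 1) requires on the writer side (the real changes live inside rb_insert_color()/rb_erase() and their helpers):

    /* publish the new child with a single store so a lockless reader
     * never observes a torn pointer */
    WRITE_ONCE(parent->rb_left, new_node);

    /* a plain "parent->rb_left = new_node;" would let the compiler
     * tear or reorder the store */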
-
Submitted by Peter Zijlstra
Currently the RCU usage in the module code is an inconsistent mess of RCU and RCU-sched; this is broken for CONFIG_PREEMPT where synchronize_rcu() does not imply synchronize_sched(). Most usage sites use preempt_{dis,en}able(), which is RCU-sched, but (most of) the modification sites use synchronize_rcu(), with the exception of the module bug list, which actually uses RCU. Convert everything over to RCU-sched. Furthermore add lockdep asserts to all sites, because it's not at all clear to me that the required locking is observed, especially on exported functions. Cc: Rusty Russell <rusty@rustcorp.com.au> Acked-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
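The reader/updater pairing this converges on, sketched:

    /* reader: an RCU-sched read-side section */
    preempt_disable();
    mod = __module_address(addr);
    /* ... use mod ... */
    preempt_enable();

    /* updater: must use the matching grace period;
     * synchronize_rcu() is NOT sufficient under CONFIG_PREEMPT */
    synchronize_sched();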
-