- 26 Jun 2015, 5 commits
-
-
Committed by Kirill A. Shutemov

Currently we use a per-cpu array to hold pointers to preallocated nodes. Let's replace it with a linked list. On x86_64 this saves 256 bytes in the per-cpu ELF section, which may translate into freeing up 2MB of memory for NR_CPUS==8192.

[akpm@linux-foundation.org: fix comment, coding style]
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
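A hedged sketch of the shape of the change (type and field names assumed from the radix-tree preload code this appears to touch; treat them as illustrative): preallocated nodes are chained through a spare pointer field instead of occupying a fixed-size per-cpu array:

    /* per-cpu preload state: a counted, singly linked list of nodes */
    struct radix_tree_preload {
            int nr;
            /* nodes chained via ->private_data, popped off on allocation */
            struct radix_tree_node *nodes;
    };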
-
Committed by Sudeep Holla

bitmap_print_to_pagebuf() uses scnprintf() to copy the cpumask/list to the page buffer. It handles the newline and trailing NUL character explicitly, which is unnecessary and also partially duplicated, as scnprintf() already adds the trailing NUL. The newline can be passed through the format string to scnprintf(). This patch does that simplification.

Theoretically there is one behavioral difference: when the buffer is too small, the original code would still output '\n' at the end, while the new code (with this patch) just continues to print the formatted string. Since this function deals only with page buffers, it is highly unlikely to hit that corner case.

This patch will help in auditing the users of bitmap_print_to_pagebuf() to verify that the buffer passed is large enough, and in getting rid of it completely by replacing them with direct scnprintf().

[akpm@linux-foundation.org: tweak comment]
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Suggested-by: Pawel Moll <Pawel.Moll@arm.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
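A minimal sketch of the simplified helper, assuming the %*pb/%*pbl vsprintf extensions and a PAGE_SIZE-sized buffer (the exact in-tree bounds handling may differ):

    int bitmap_print_to_pagebuf(bool list, char *buf,
                                const unsigned long *maskp, int nmaskbits)
    {
            /* '\n' lives in the format string; scnprintf() adds the NUL */
            return list ?
                    scnprintf(buf, PAGE_SIZE, "%*pbl\n", nmaskbits, maskp) :
                    scnprintf(buf, PAGE_SIZE, "%*pb\n", nmaskbits, maskp);
    }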
-
Committed by Daniel Wagner

In case the call site is not providing a swap function, we use either a 32-bit or a generic swap function. When swapping around pointers on 64-bit architectures, falling back to the generic swap function seems like an unnecessary waste. There are at least 9 users ('sort' is difficult to grep for) of sort(), and all of them use sort() without a customized swap function. Furthermore, they are all swapping around pointers:

arch/x86/kernel/e820.c:sanitize_e820_map()
arch/x86/mm/extable.c:sort_extable()
drivers/acpi/fan.c:acpi_fan_get_fps()
fs/btrfs/super.c:btrfs_descending_sort_devices()
fs/xfs/libxfs/xfs_dir2_block.c:xfs_dir2_sf_to_block()
kernel/range.c:clean_sort_range()
mm/memcontrol.c:__mem_cgroup_usage_register_event()
sound/pci/hda/hda_auto_parser.c:snd_hda_parse_pin_defcfg()
sound/pci/hda/hda_auto_parser.c:sort_pins_by_sequence()

Obviously, we could improve the swap for other sizes as well, but that is overkill at this point.

A simple test of sorting a 400-element array (to stay within one page) with either u32_swap() or u64_swap() shows that the theory actually works. This test was done on an x86_64 (Intel Xeon E5-4610) machine.

- swap_32:
NumSamples = 100; Min = 48.00; Max = 49.00
Mean = 48.320000; Variance = 0.217600; SD = 0.466476; Median 48.000000
each * represents a count of 1
48.0000 - 48.1000 [ 68]: ********************************************************************
48.1000 - 48.2000 [ 0]:
48.2000 - 48.3000 [ 0]:
48.3000 - 48.4000 [ 0]:
48.4000 - 48.5000 [ 0]:
48.5000 - 48.6000 [ 0]:
48.6000 - 48.7000 [ 0]:
48.7000 - 48.8000 [ 0]:
48.8000 - 48.9000 [ 0]:
48.9000 - 49.0000 [ 32]: ********************************

- swap_64:
NumSamples = 100; Min = 44.00; Max = 63.00
Mean = 48.250000; Variance = 18.687500; SD = 4.322904; Median 47.000000
each * represents a count of 1
44.0000 - 45.9000 [ 15]: ***************
45.9000 - 47.8000 [ 37]: *************************************
47.8000 - 49.7000 [ 39]: ***************************************
49.7000 - 51.6000 [ 0]:
51.6000 - 53.5000 [ 0]:
53.5000 - 55.4000 [ 0]:
55.4000 - 57.3000 [ 0]:
57.3000 - 59.2000 [ 1]: *
59.2000 - 61.1000 [ 3]: ***
61.1000 - 63.0000 [ 5]: *****

- swap_72:
NumSamples = 100; Min = 53.00; Max = 71.00
Mean = 55.070000; Variance = 21.565100; SD = 4.643824; Median 53.000000
each * represents a count of 1
53.0000 - 54.8000 [ 73]: *************************************************************************
54.8000 - 56.6000 [ 9]: *********
56.6000 - 58.4000 [ 9]: *********
58.4000 - 60.2000 [ 0]:
60.2000 - 62.0000 [ 0]:
62.0000 - 63.8000 [ 0]:
63.8000 - 65.6000 [ 0]:
65.6000 - 67.4000 [ 1]: *
67.4000 - 69.2000 [ 4]: ****
69.2000 - 71.0000 [ 4]: ****

- test program (reflowed; includes and the ARRAY_ELEMENTS definition added so it builds, the 400-element size being assumed from the test description above):

    #include <linux/kernel.h>
    #include <linux/ktime.h>
    #include <linux/module.h>
    #include <linux/random.h>
    #include <linux/slab.h>
    #include <linux/sort.h>
    #include <asm/unaligned.h>

    /* assumed from the 400-element test described above */
    #define ARRAY_ELEMENTS 400

    static int cmp_32(const void *a, const void *b)
    {
            u32 l = *(u32 *)a;
            u32 r = *(u32 *)b;

            if (l < r)
                    return -1;
            if (l > r)
                    return 1;
            return 0;
    }

    static int cmp_64(const void *a, const void *b)
    {
            u64 l = *(u64 *)a;
            u64 r = *(u64 *)b;

            if (l < r)
                    return -1;
            if (l > r)
                    return 1;
            return 0;
    }

    static int cmp_72(const void *a, const void *b)
    {
            /* 9-byte elements; compare their first 4 (unaligned) bytes */
            u32 l = get_unaligned((u32 *)a);
            u32 r = get_unaligned((u32 *)b);

            if (l < r)
                    return -1;
            if (l > r)
                    return 1;
            return 0;
    }

    static void init_array32(void *array)
    {
            u32 *a = array;
            int i;

            a[0] = 3821;
            for (i = 1; i < ARRAY_ELEMENTS; i++)
                    a[i] = next_pseudo_random32(a[i - 1]);
    }

    static void init_array64(void *array)
    {
            u64 *a = array;
            int i;

            a[0] = 3821;
            for (i = 1; i < ARRAY_ELEMENTS; i++)
                    a[i] = next_pseudo_random32(a[i - 1]);
    }

    static void init_array72(void *array)
    {
            char *p;
            u32 v;
            int i;

            v = 3821;
            for (i = 0; i < ARRAY_ELEMENTS; i++) {
                    p = (char *)array + (i * 9);
                    put_unaligned(v, (u32 *)p);
                    v = next_pseudo_random32(v);
            }
    }

    static void sort_test(void (*init)(void *array),
                          int (*cmp)(const void *, const void *),
                          void *array, size_t size)
    {
            ktime_t start, stop;
            int i;

            for (i = 0; i < 10000; i++) {
                    init(array);

                    local_irq_disable();
                    start = ktime_get();

                    sort(array, ARRAY_ELEMENTS, size, cmp, NULL);

                    stop = ktime_get();
                    local_irq_enable();

                    /* report only the last 100 runs */
                    if (i > 10000 - 101)
                            pr_info("%lld\n",
                                    ktime_to_us(ktime_sub(stop, start)));
            }
    }

    static void *create_array(size_t size)
    {
            void *array;

            array = kmalloc(ARRAY_ELEMENTS * size, GFP_KERNEL);
            if (!array)
                    return NULL;

            return array;
    }

    static int perform_test(size_t size)
    {
            void *array;

            array = create_array(size);
            if (!array)
                    return -ENOMEM;

            pr_info("test element size %d bytes\n", (int)size);
            switch (size) {
            case 4:
                    sort_test(init_array32, cmp_32, array, size);
                    break;
            case 8:
                    sort_test(init_array64, cmp_64, array, size);
                    break;
            case 9:
                    sort_test(init_array72, cmp_72, array, size);
                    break;
            }

            kfree(array);
            return 0;
    }

    static int __init sort_tests_init(void)
    {
            int err;

            err = perform_test(sizeof(u32));
            if (err)
                    return err;

            err = perform_test(sizeof(u64));
            if (err)
                    return err;

            err = perform_test(sizeof(u64) + 1);
            if (err)
                    return err;

            return 0;
    }

    static void __exit sort_tests_exit(void)
    {
    }

    module_init(sort_tests_init);
    module_exit(sort_tests_exit);

    MODULE_LICENSE("GPL v2");
    MODULE_AUTHOR("Daniel Wagner");
    MODULE_DESCRIPTION("sort performance tests");

Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Geert Uytterhoeven

The test data arrays, containing pointers to test strings, are never modified, so they can be const, too. Hence mark them "const" and "__initconst". This moves 28 pointers from ".init.data" to ".init.rodata".

Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
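A hypothetical illustration of the pattern (the array contents are invented): the two const qualifiers make both the pointers and the pointees immutable, and __initconst places the array in .init.rodata so it is discarded after init:

    static const char * const test_strings[] __initconst = {
            "plain", "%d", "%pM",	/* illustrative test inputs */
    };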
-
Committed by Chris Metcalf

bitmap_parselist("", &mask, nmaskbits) will erroneously set bit zero in the mask. The same bug is visible in cpumask_parselist(), since it is layered on top of the bitmap code; e.g. if you boot with "isolcpus=", you will actually end up with cpu zero isolated.

The bug was introduced in commit 4b060420 ("bitmap, irq: add smp_affinity_list interface to /proc/irq") when bitmap_parselist() was generalized to support userspace as well as kernelspace.

Fixes: 4b060420 ("bitmap, irq: add smp_affinity_list interface to /proc/irq")
Signed-off-by: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
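A small self-check sketch of the intended semantics after the fix (kernel context assumed; the 64-bit width is arbitrary):

    static int __init parselist_demo(void)
    {
            DECLARE_BITMAP(mask, 64);

            bitmap_parselist("", mask, 64);
            /* with the fix, an empty list string leaves no bits set */
            WARN_ON(!bitmap_empty(mask, 64));
            return 0;
    }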
-
- 19 Jun 2015, 1 commit
-
-
Committed by Andrew Morton

Revert commit 534b483a ("cpumask: don't perform while loop in cpumask_next_and()"). This was a minor optimization, but it puts a `struct cpumask' on the stack, which consumes too much stack space.

Reported-by: Peter Zijlstra <peterz@infradead.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Amir Vadai <amirv@mellanox.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 16 Jun 2015, 1 commit
-
-
Committed by Tadeusz Struk

Added an mpi_read_buf() helper function to export an MPI to a buffer provided by the user, and an mpi_get_size() helper that tells the user how big the buffer is. Changed mpi_free() to use kzfree() instead of kfree(), because it is used to free crypto keys.

Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 13 Jun 2015, 1 commit
-
-
Committed by Jaedon Shin

This patch fixes a MIPS compilation error:

lib/mpi/generic_mpih-mul1.c: In function 'mpihelp_mul_1':
lib/mpi/longlong.h:651:2: error: impossible constraint in 'asm'

Signed-off-by: Jaedon Shin <jaedon.shin@gmail.com>
Cc: Linux-MIPS <linux-mips@linux-mips.org>
Patchwork: https://patchwork.linux-mips.org/patch/10546/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 11 Jun 2015, 2 commits
-
-
Committed by Anton Blanchard

The -mabi=altivec option is not recognised by LLVM, so use a cc-option call to check for support.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Joerg Roedel

Print a warning when all allocation tries have failed and the function is about to return NULL. This prepares for calling the function with __GFP_NOWARN to suppress allocation failure warnings before all fallbacks have failed - which we'll do to improve kdump behavior.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Baoquan He <bhe@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jörg Rödel <joro@8bytes.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: kexec@lists.infradead.org
Link: http://lkml.kernel.org/r/1433500202-25531-2-git-send-email-joro@8bytes.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
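A minimal sketch of the idea for the coherent-allocation failure path (helper name and message text are illustrative, not the exact patch): individual attempts can be made with __GFP_NOWARN, and this fires only once everything has failed:

    static void *warn_coherent_failure(struct device *hwdev, size_t size)
    {
            /* every fallback failed; warn once instead of per attempt */
            pr_warn("swiotlb: coherent allocation failed for device %s size=%zu\n",
                    dev_name(hwdev), size);
            dump_stack();
            return NULL;
    }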
-
- 07 Jun 2015, 1 commit
-
-
Committed by Hauke Mehrtens

rhashtable uses EXPORT_SYMBOL_GPL() without including linux/export.h directly; it is only pulled in indirectly through some other includes.

Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 06 Jun 2015, 1 commit
-
-
Committed by Alexandre Courbot

The map_single() function is not defined as static, even though it doesn't seem to be used anywhere else in the kernel. Make it static to avoid namespace pollution, since this is a rather generic symbol.

Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
- 03 Jun 2015, 3 commits
-
-
Committed by Jan Kara

strnlen_user() can return a number in the range 0 to count + sizeof(unsigned long) - 1. Clarify the comment at the top of the function so that users don't think the function returns at most count+1.

Signed-off-by: Jan Kara <jack@suse.cz>
[ Also added commentary about preferably not using this function ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
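A defensive-usage sketch implied by the clarified comment (wrapper name and parameters are hypothetical): treat 0 as a fault and clamp anything beyond count rather than assuming count+1 is the ceiling:

    static long bounded_user_strlen(const char __user *ustr, long count)
    {
            long len = strnlen_user(ustr, count);

            if (!len)
                    return -EFAULT;		/* faulted while scanning */
            return min(len, count);		/* len may exceed count */
    }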
-
Committed by Tom Lendacky

When performing a dma_map_sg() call, the number of sg entries to map is required. Using sg_nents() to retrieve the number of sg entries will return the total number of entries in the sg list up to the entry marked as the end. If there happen to be unused entries in the list, these will still be counted. Some dma_map_sg() implementations will not handle the unused entries correctly (lib/swiotlb.c) and will execute a BUG_ON.

The sg_nents_for_len() function will traverse the sg list and return the number of entries required to satisfy the supplied length argument. This can then be supplied to the dma_map_sg() call to successfully map the sg list.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
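A hedged usage sketch (wrapper name, device, and direction are illustrative); sg_nents_for_len() returns a negative errno when the list cannot cover the requested length:

    static int map_for_len(struct device *dev, struct scatterlist *sg, u64 len)
    {
            int nents = sg_nents_for_len(sg, len);

            if (nents < 0)
                    return nents;	/* list shorter than 'len' bytes */

            /* map only the entries actually needed, avoiding the BUG_ON */
            nents = dma_map_sg(dev, sg, nents, DMA_TO_DEVICE);
            return nents ? nents : -EIO;
    }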
-
Committed by Jan Kara

If the specified maximum length of the string is a multiple of unsigned long, we would load one long behind the specified maximum. If that happens to be in the next page, we can hit a page fault although we were not expected to. Fix the off-by-one bug in the test of whether we are at the end of the specified range.

Signed-off-by: Jan Kara <jack@suse.cz>
Cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 29 May 2015, 1 commit
-
-
Committed by Dave Chinner

XFS uses non-standard batch sizes to avoid frequent global counter updates on its allocated inode counters, as they increment or decrement in batches of 64 inodes. Hence the standard percpu counter batch of 32 means that the counter is effectively a global counter. Currently XFS uses a batch size of 128 so that it doesn't take the global lock on every single modification. However, XFS also needs to compare accurately against zero, which means we need to use percpu_counter_compare(); that has a hard-coded batch size of 32, and hence will spuriously fail to detect when it is supposed to use precise comparisons, so the accounting goes wrong.

Add __percpu_counter_compare() to take a custom batch size so we can use it sanely in XFS, and factor percpu_counter_compare() to use it.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Dave Chinner <david@fromorbit.com>
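A sketch of the factoring described above (treat details as illustrative): the default-batch compare becomes a thin wrapper, and a caller like XFS can pass its own batch, e.g. __percpu_counter_compare(&counter, 0, 128):

    static inline int
    percpu_counter_compare(struct percpu_counter *fbc, s64 rhs)
    {
            /* old behaviour preserved via the global default batch */
            return __percpu_counter_compare(fbc, rhs, percpu_counter_batch);
    }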
-
- 28 May 2015, 5 commits
-
-
Committed by Rusty Russell

da91309e (cpumask: Utility function to set n'th cpu...) created a genuinely weird function. I never saw it before; it went through DaveM. (He only does this to make us other maintainers feel better about our own mistakes.)

cpumask_set_cpu_local_first's purpose is to say "I need to spread things across N online cpus, choose the ones on this numa node first"; you call it in a loop.

It can fail. One of the two callers ignores this; the other aborts and fails the device open. It can fail in two ways: allocating the off-stack cpumask, or through a convoluted codepath which AFAICT can only occur if cpu_online_mask changes. Which shouldn't happen, because if cpu_online_mask can change while you call this, it could return a now-offline cpu anyway.

It contains a nonsensical test "!cpumask_of_node(numa_node)". This was drawn to my attention by Geert, who said this causes a warning on Sparc.

It sets a single bit in a cpumask instead of returning a cpu number, because that's what the callers want. It could be made more efficient by passing the previous cpu rather than an index, but that would be more invasive to the callers.

Fixes: da91309e
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (then rebased)
Tested-by: Amir Vadai <amirv@mellanox.com>
Acked-by: Amir Vadai <amirv@mellanox.com>
Acked-by: David S. Miller <davem@davemloft.net>
-
Committed by Paul E. McKenney

This commit applies some warning-omission micro-optimizations to RCU's various extended-quiescent-state functions, which are on the kernel/user hotpath for CONFIG_NO_HZ_FULL=y.

Reported-by: Rik van Riel <riel@redhat.com>
Reported-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Committed by Paul E. McKenney

Currently, Kconfig will ask the user whether TASKS_RCU should be set. This is silly because Kconfig already has all the information that it needs to set this parameter. This commit therefore directly drives the value of TASKS_RCU via "select" statements, which means that as subsystems require TASKS_RCU, those subsystems will need to add "select" statements of their own.

Reported-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
-
Committed by Paul E. McKenney

Grace-period scans of the rcu_node combining tree normally proceed quite quickly, so that it is very difficult to reproduce races against them. This commit therefore allows grace-period pre-initialization and cleanup to be artificially slowed down, increasing race-reproduction probability. Two pairs of new Kconfig parameters are provided: RCU_TORTURE_TEST_SLOW_PREINIT to enable the slowing down of propagating CPU-hotplug changes up the combining tree, along with RCU_TORTURE_TEST_SLOW_PREINIT_DELAY to specify the delay in jiffies, and RCU_TORTURE_TEST_SLOW_CLEANUP to enable the slowing down of the end-of-grace-period cleanup scan, along with RCU_TORTURE_TEST_SLOW_CLEANUP_DELAY to specify the delay in jiffies. Boot-time parameters named rcutree.gp_preinit_delay and rcutree.gp_cleanup_delay allow these delays to be specified at boot time.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Committed by Daniel Borkmann

While 3b529602 ("test_bpf: add more eBPF jump torture cases") added the int3 bug test case only for eBPF, which needs exactly 11 passes to converge, here's a version for classic BPF with 11 passes, and one that would need 70 passes on x86_64 to actually converge for being successfully JITed. Effectively, all jumps are being optimized out, resulting in a JIT image of just 89 bytes (from originally max BPF insns), only returning K.

Might be useful as a recipe for folks wanting to craft a test case when backporting the fix in commit 3f7352bf ("x86: bpf_jit: fix compilation of large bpf programs") while not having eBPF. The second one is delegated to the interpreter, as the last pass still results in shrinking; in other words, this one won't be JITed on x86_64.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 27 May 2015, 1 commit
-
-
Committed by Bartosz Golaszewski

Rename topology_thread_cpumask() to topology_sibling_cpumask() for more consistency with scheduler code.

Signed-off-by: Bartosz Golaszewski <bgolaszewski@baylibre.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Benoit Cousson <bcousson@baylibre.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Cc: Jean Delvare <jdelvare@suse.de>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Drokin <oleg.drokin@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Link: http://lkml.kernel.org/r/1432645896-12588-2-git-send-email-bgolaszewski@baylibre.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
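A tiny illustration of a call site after the rename (the helper itself is invented):

    /* number of hardware threads sharing a core with 'cpu' */
    static unsigned int core_thread_count(unsigned int cpu)
    {
            return cpumask_weight(topology_sibling_cpumask(cpu));
    }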
-
- 26 May 2015, 1 commit
-
-
Committed by Antonio Ospite

Signed-off-by: Antonio Ospite <ao2@ao2.it>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 25 May 2015, 1 commit
-
-
Committed by Daniel Borkmann

Add two more eBPF test cases for JITs. The second one revealed a bug in the x86_64 JIT compiler, where only an int3-filled image from the allocator was emitted and later wrongly set by the compiler as the bpf_func program code, since the optimization pass boundary was surpassed without actually emitting opcodes.

Interpreter:

[ 45.782892] test_bpf: #242 BPF_MAXINSNS: Very long jump backwards jited:0 11 PASS
[ 45.783062] test_bpf: #243 BPF_MAXINSNS: Edge hopping nuthouse jited:0 14705 PASS

After x86_64 JIT (fixed):

[ 80.495638] test_bpf: #242 BPF_MAXINSNS: Very long jump backwards jited:1 6 PASS
[ 80.495957] test_bpf: #243 BPF_MAXINSNS: Edge hopping nuthouse jited:1 17157 PASS

Reference: http://thread.gmane.org/gmane.linux.network/364729
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 23 May 2015, 1 commit
-
-
Committed by Michael Holzheu

Currently the test suite does not have a test case with a backward jump. The s390x JIT (kernel 4.0) had a bug in that area, so add one new test case for it now.

Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 19 May 2015, 3 commits
-
-
Committed by Ingo Molnar

We already have fpu/types.h; move i387.h to fpu/api.h. The file name had become a misnomer anyway: it offers generic FPU APIs, but is not limited to i387 functionality.

Reviewed-by: Borislav Petkov <bp@alien8.de>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by David Hildenbrand

In general, non-atomic variants of user access functions must not sleep if pagefaults are disabled. Let's update all relevant comments in uaccess code. This also reflects the might_sleep() checks in might_fault().

Reviewed-and-tested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: David.Laight@ACULAB.COM
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: airlied@linux.ie
Cc: akpm@linux-foundation.org
Cc: benh@kernel.crashing.org
Cc: bigeasy@linutronix.de
Cc: borntraeger@de.ibm.com
Cc: daniel.vetter@intel.com
Cc: heiko.carstens@de.ibm.com
Cc: herbert@gondor.apana.org.au
Cc: hocko@suse.cz
Cc: hughd@google.com
Cc: mst@redhat.com
Cc: paulus@samba.org
Cc: ralf@linux-mips.org
Cc: schwidefsky@de.ibm.com
Cc: yang.shi@windriver.com
Link: http://lkml.kernel.org/r/1431359540-32227-4-git-send-email-dahi@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Frederic Weisbecker

preempt_mask.h defines all the preempt_count semantics and related symbols: preempt, softirq, hardirq, nmi, preempt active, need resched, etc. preempt.h defines the accessors and mutators of preempt_count. But there is a messy dependency game around those two header files:

* preempt_mask.h includes preempt.h in order to access preempt_count().

* preempt_mask.h defines all preempt_count semantics and symbols except PREEMPT_NEED_RESCHED, which is needed by asm/preempt.h. Thus we need to define it from preempt.h, right before including asm/preempt.h, instead of defining it in preempt_mask.h with the other preempt_count symbols. Therefore the preempt_count semantics happen to be spread out.

* We plan to introduce preempt_active_[enter,exit]() to consolidate preempt_schedule*() code. But we'll need to access both preempt_count mutators (preempt_count_add()) and preempt_count symbols (PREEMPT_ACTIVE, PREEMPT_OFFSET). The usual place to define preempt operations is in preempt.h, but then we'd need symbols that live in preempt_mask.h, which already includes preempt.h. So we end up with a circular resource dependency.

Let's merge preempt_mask.h into preempt.h to solve these dependency issues. This way we gather the semantic symbols and the operation definitions of preempt_count in a single file.

This is a dumb copy-paste merge. Further merge rearrangements are performed in a subsequent patch to ease review.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1431441711-29753-2-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 17 May 2015, 1 commit
-
-
Committed by Herbert Xu

We currently have no limit on the number of elements in a hash table. This is a problem because some users (tipc) set a ceiling on the maximum table size, and when that is reached the hash table may degenerate. Others may encounter OOM when growing, and if we allow insertions when that happens the hash table performance may also suffer.

This patch adds a new parameter insecure_max_entries, which becomes the cap on the table. If unset, it defaults to max_size * 2. If it is also zero, it means that there is no cap on the number of elements in the table. However, the table will grow whenever the utilisation hits 100%, and if that growth fails, you will get ENOMEM on insertion.

As allowing oversubscription is potentially dangerous, the name contains the word insecure.

Note that the cap is not a hard limit. This is done for performance reasons, as enforcing a hard limit would result in use of atomic ops that are heavier than the ones we currently use. The reasoning is that we're only guarding against a gross oversubscription of the table, rather than a small breach of the limit.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
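A hedged configuration sketch (the object layout and sizes are invented; the field name comes from the commit text):

    struct demo_obj {
            u32 key;
            struct rhash_head node;
    };

    static const struct rhashtable_params demo_params = {
            .head_offset            = offsetof(struct demo_obj, node),
            .key_offset             = offsetof(struct demo_obj, key),
            .key_len                = sizeof(u32),
            .max_size               = 1024,
            /* cap insertions; would default to max_size * 2 if unset */
            .insecure_max_entries   = 2048,
    };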
-
- 15 May 2015, 2 commits
-
-
Committed by Michael Holzheu

Fix several sparse warnings like:

lib/test_bpf.c:1824:25: sparse: constant 4294967295 is so big it is long
lib/test_bpf.c:1878:25: sparse: constant 0x0000ffffffff0000 is so big it is long

Fixes: cffc642d ("test_bpf: add 173 new testcases for eBPF")
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Daniel Borkmann

A couple of torture test cases related to the bug fixed in 0b59d880 ("ARM: net: delegate filter to kernel interpreter when imm_offset() return value can't fit into 12bits."). I've added a helper to allocate and fill the insn space.

Output on x86_64 from my laptop:

test_bpf: #233 BPF_MAXINSNS: Maximum possible literals jited:0 7 PASS
test_bpf: #234 BPF_MAXINSNS: Single literal jited:0 8 PASS
test_bpf: #235 BPF_MAXINSNS: Run/add until end jited:0 11553 PASS
test_bpf: #236 BPF_MAXINSNS: Too many instructions PASS
test_bpf: #237 BPF_MAXINSNS: Very long jump jited:0 9 PASS
test_bpf: #238 BPF_MAXINSNS: Ctx heavy transformations jited:0 20329 20398 PASS
test_bpf: #239 BPF_MAXINSNS: Call heavy transformations jited:0 32178 32475 PASS
test_bpf: #240 BPF_MAXINSNS: Jump heavy test jited:0 10518 PASS

test_bpf: #233 BPF_MAXINSNS: Maximum possible literals jited:1 4 PASS
test_bpf: #234 BPF_MAXINSNS: Single literal jited:1 4 PASS
test_bpf: #235 BPF_MAXINSNS: Run/add until end jited:1 1625 PASS
test_bpf: #236 BPF_MAXINSNS: Too many instructions PASS
test_bpf: #237 BPF_MAXINSNS: Very long jump jited:1 8 PASS
test_bpf: #238 BPF_MAXINSNS: Ctx heavy transformations jited:1 3301 3174 PASS
test_bpf: #239 BPF_MAXINSNS: Call heavy transformations jited:1 24107 23491 PASS
test_bpf: #240 BPF_MAXINSNS: Jump heavy test jited:1 8651 PASS

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Nicolas Schichan <nschichan@freebox.fr>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 13 May 2015, 3 commits
-
-
Committed by Michael Holzheu

Add an exhaustive set of eBPF tests, bringing the total to:

test_bpf: Summary: 233 PASSED, 0 FAILED, [0/226 JIT'ed]

Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Dan Streetman

Avoid a 64-bit mod operation, which won't work on 32-bit systems. Simple subtraction can be used instead in this case.

Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Dan Streetman <ddstreet@ieee.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
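An illustration of this class of fix (names hypothetical, not the patched code): a 64-bit '%' pulls in a libgcc helper that 32-bit kernels don't link, but when the left operand is known to be less than twice the modulus, a compare-and-subtract suffices:

    /* valid only when pos < 2 * size, as assumed here */
    static u64 wrap_offset(u64 pos, u64 size)
    {
            /* 'pos % size' would emit __umoddi3 on 32-bit targets */
            return pos >= size ? pos - size : pos;
    }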
-
Committed by Dan Streetman

Make the do_index and do_op functions static. They are used only internally by the 842 decompression function, and should be static.

Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Dan Streetman <ddstreet@ieee.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 11 May 2015, 2 commits
-
-
Committed by Xi Wang

Extend the test case to catch a signedness bug in the arm64 JIT:

test_bpf: #58 load 64-bit immediate jited:1 ret -1 != 1 FAIL (1 times)

This is useful to ensure other JITs won't have a similar bug.

Link: https://lkml.org/lkml/2015/5/8/458
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Xi Wang <xi.wang@gmail.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Dan Streetman

Add 842-format software compression and decompression functions. Update the MAINTAINERS 842 section to include the new files.

The 842 compression function can compress any input data into the 842 compression format. The 842 decompression function can decompress any standard-format 842 compressed data - specifically, either a compressed data buffer created by the 842 software compression function, or a compressed data buffer created by the 842 hardware compressor (located in PowerPC coprocessors).

The 842 compressed data format is explained in the header comments.

This is used in a later patch to provide a full software 842 compression and decompression crypto interface.

Signed-off-by: Dan Streetman <ddstreet@ieee.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
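A hedged round-trip sketch against the new software API (function and constant names as exported by include/linux/sw842.h; the 2x output bound and error handling are simplifying assumptions):

    #include <linux/slab.h>
    #include <linux/sw842.h>

    static int demo_842_roundtrip(const u8 *src, unsigned int slen)
    {
            unsigned int clen = 2 * slen, dlen = slen;
            u8 *comp = kmalloc(clen, GFP_KERNEL);
            u8 *decomp = kmalloc(dlen, GFP_KERNEL);
            /* compression needs scratch memory of SW842_MEM_COMPRESS bytes */
            void *wmem = kmalloc(SW842_MEM_COMPRESS, GFP_KERNEL);
            int ret = -ENOMEM;

            if (!comp || !decomp || !wmem)
                    goto out;

            ret = sw842_compress(src, slen, comp, &clen, wmem);
            if (!ret)
                    ret = sw842_decompress(comp, clen, decomp, &dlen);
    out:
            kfree(wmem);
            kfree(decomp);
            kfree(comp);
            return ret;
    }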
-
- 06 May 2015, 4 commits
-
-
Committed by Joe Perches

The documentation shows a need for gcc > 4.9.2, but it's really >=. The Kconfig entries don't show the required versions, so add them. Correct a latter/later typo too. Also mention that gcc 5 is required to catch out-of-bounds accesses to global and stack variables.

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Yury Norov

The file lib/find_last_bit.c is no longer used and was supposed to be deleted by commit 8f6f19dd ("lib: move find_last_bit to lib/find_next_bit.c"), but that deletion didn't happen. This gets rid of it.

Signed-off-by: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Thomas Graf

A 64-bit division went in unnoticed. Use do_div() to accommodate non-64-bit architectures.

Reported-by: kbuild test robot
Fixes: 1aa661f5 ("rhashtable-test: Measure time to insert, remove & traverse entries")
Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
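A minimal sketch of the do_div() idiom (the averaging helper is invented): do_div() divides its 64-bit first argument in place by a 32-bit divisor and evaluates to the remainder, so no __udivdi3/__umoddi3 libgcc helper is needed on 32-bit:

    #include <asm/div64.h>

    static u64 avg_ns(u64 total_ns, u32 samples)
    {
            do_div(total_ns, samples);	/* total_ns /= samples */
            return total_ns;
    }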
-
Committed by Thomas Graf

Remove the useless obj variable and goto logic.

Signed-off-by: Thomas Graf <tgraf@suug.ch>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
-