1. June 26, 2015: 5 commits
    • radix-tree: replace preallocated node array with linked list · 9d2a8da0
      Authored by Kirill A. Shutemov
      Currently we use a per-cpu array to hold pointers to preallocated
      nodes.  Let's replace it with a linked list.  On x86_64 this saves 256
      bytes in the per-cpu ELF section, which may translate into freeing up
      2MB of memory for NR_CPUS==8192.
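
      A minimal sketch of the new scheme (the patch threads the list through
      an existing node pointer field, shown here as private_data; details
      hedged):

      	struct radix_tree_preload {
      		int nr;
      		/* nodes->private_data points to the next preallocated node */
      		struct radix_tree_node *nodes;
      	};

      	/* preload: push a freshly allocated node onto the per-cpu list */
      	node->private_data = rtp->nodes;
      	rtp->nodes = node;
      	rtp->nr++;

      	/* allocation: pop one off the list */
      	node = rtp->nodes;
      	rtp->nodes = node->private_data;
      	rtp->nr--;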
      
      [akpm@linux-foundation.org: fix comment, coding style]
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • bitmap: remove explicit newline handling using scnprintf format string · 9cf79d11
      Authored by Sudeep Holla
      bitmap_print_to_pagebuf uses scnprintf to copy the cpumask/list to the
      page buffer.  It handles the newline and trailing null character
      explicitly.

      This is unnecessary and also partially duplicated, as scnprintf already
      adds the trailing null character.  The newline can be passed to
      scnprintf through the format string.  This patch does that
      simplification.
      
      However, there is theoretically one behavior difference: when the
      buffer is too small, the original code would still output '\n' at the
      end, while the new code (with this patch) just continues printing the
      formatted string.  Since this function deals only with page buffers,
      it is highly unlikely to hit that corner case.
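
      A before/after sketch of the change (shape hedged against
      lib/bitmap.c):

      	/* before: newline and NUL appended by hand */
      	n = list ? scnprintf(buf, len, "%*pbl", nmaskbits, maskp)
      		 : scnprintf(buf, len, "%*pb", nmaskbits, maskp);
      	buf[n++] = '\n';
      	buf[n] = '\0';

      	/* after: newline in the format string; scnprintf NUL-terminates */
      	n = list ? scnprintf(buf, len, "%*pbl\n", nmaskbits, maskp)
      		 : scnprintf(buf, len, "%*pb\n", nmaskbits, maskp);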
      
      This patch will also help in auditing the users of
      bitmap_print_to_pagebuf to verify that the buffer passed is large
      enough, and in eventually getting rid of the function entirely by
      replacing those users with direct scnprintf() calls.
      
      [akpm@linux-foundation.org: tweak comment]
      Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
      Suggested-by: Pawel Moll <Pawel.Moll@arm.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • lib/sort: Add 64 bit swap function · ca96ab85
      Authored by Daniel Wagner
      In case the call site does not provide a swap function, we use either
      a 32-bit or a generic swap function.  When swapping pointers around on
      64-bit architectures, falling back to the generic swap function seems
      like an unnecessary waste.
      
      There are at least 9 users of sort() ('sort' is difficult to grep
      for), and all of them use the sort function without a customized swap
      function.  Furthermore, they are all swapping pointers around:
      
      arch/x86/kernel/e820.c:sanitize_e820_map()
      arch/x86/mm/extable.c:sort_extable()
      drivers/acpi/fan.c:acpi_fan_get_fps()
      fs/btrfs/super.c:btrfs_descending_sort_devices()
      fs/xfs/libxfs/xfs_dir2_block.c:xfs_dir2_sf_to_block()
      kernel/range.c:clean_sort_range()
      mm/memcontrol.c:__mem_cgroup_usage_register_event()
      sound/pci/hda/hda_auto_parser.c:snd_hda_parse_pin_defcfg()
      sound/pci/hda/hda_auto_parser.c:sort_pins_by_sequence()
      
      Obviously, we could improve the swap for other sizes as well
      but this is overkill at this point.
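
      A sketch of the added fast path in lib/sort.c (alignment_ok() is
      hedged: it is meant to accept unaligned bases only on architectures
      with efficient unaligned access):

      	static void u64_swap(void *a, void *b, int size)
      	{
      		u64 t = *(u64 *)a;

      		*(u64 *)a = *(u64 *)b;
      		*(u64 *)b = t;
      	}

      	/* in sort(), when the caller supplies no swap function: */
      	if (!swap_func) {
      		if (size == 4 && alignment_ok(base, 4))
      			swap_func = u32_swap;
      		else if (size == 8 && alignment_ok(base, 8))
      			swap_func = u64_swap;
      		else
      			swap_func = generic_swap;
      	}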
      
      A simple test, sorting a 400-element array (sized to stay within one
      page) with either u32_swap() or u64_swap(), shows that the theory
      actually works.  The test was done on an x86_64 (Intel Xeon E5-4610)
      machine.
      
      - swap_32:
      
      NumSamples = 100; Min = 48.00; Max = 49.00
      Mean = 48.320000; Variance = 0.217600; SD = 0.466476; Median 48.000000
      each * represents a count of 1
         48.0000 -    48.1000 [    68]: ********************************************************************
         48.1000 -    48.2000 [     0]:
         48.2000 -    48.3000 [     0]:
         48.3000 -    48.4000 [     0]:
         48.4000 -    48.5000 [     0]:
         48.5000 -    48.6000 [     0]:
         48.6000 -    48.7000 [     0]:
         48.7000 -    48.8000 [     0]:
         48.8000 -    48.9000 [     0]:
         48.9000 -    49.0000 [    32]: ********************************
      
      - swap_64:
      
      NumSamples = 100; Min = 44.00; Max = 63.00
      Mean = 48.250000; Variance = 18.687500; SD = 4.322904; Median 47.000000
      each * represents a count of 1
         44.0000 -    45.9000 [    15]: ***************
         45.9000 -    47.8000 [    37]: *************************************
         47.8000 -    49.7000 [    39]: ***************************************
         49.7000 -    51.6000 [     0]:
         51.6000 -    53.5000 [     0]:
         53.5000 -    55.4000 [     0]:
         55.4000 -    57.3000 [     0]:
         57.3000 -    59.2000 [     1]: *
         59.2000 -    61.1000 [     3]: ***
         61.1000 -    63.0000 [     5]: *****
      
      - swap_72:
      
      NumSamples = 100; Min = 53.00; Max = 71.00
      Mean = 55.070000; Variance = 21.565100; SD = 4.643824; Median 53.000000
      each * represents a count of 1
         53.0000 -    54.8000 [    73]: *************************************************************************
         54.8000 -    56.6000 [     9]: *********
         56.6000 -    58.4000 [     9]: *********
         58.4000 -    60.2000 [     0]:
         60.2000 -    62.0000 [     0]:
         62.0000 -    63.8000 [     0]:
         63.8000 -    65.6000 [     0]:
         65.6000 -    67.4000 [     1]: *
         67.4000 -    69.2000 [     4]: ****
         69.2000 -    71.0000 [     4]: ****
      
      - test program:
      
      #include <linux/init.h>
      #include <linux/irqflags.h>
      #include <linux/kernel.h>
      #include <linux/ktime.h>
      #include <linux/module.h>
      #include <linux/random.h>
      #include <linux/slab.h>
      #include <linux/sort.h>
      #include <asm/unaligned.h>

      /* 400 elements keeps each test array within one page */
      #define ARRAY_ELEMENTS 400

      static int cmp_32(const void *a, const void *b)
      {
      	u32 l = *(u32 *)a;
      	u32 r = *(u32 *)b;
      
      	if (l < r)
      		return -1;
      	if (l > r)
      		return 1;
      	return 0;
      }
      
      static int cmp_64(const void *a, const void *b)
      {
      	u64 l = *(u64 *)a;
      	u64 r = *(u64 *)b;
      
      	if (l < r)
      		return -1;
      	if (l > r)
      		return 1;
      	return 0;
      }
      
      /* 9-byte (72-bit) elements: order by the leading u32, read unaligned */
      static int cmp_72(const void *a, const void *b)
      {
      	u32 l = get_unaligned((u32 *) a);
      	u32 r = get_unaligned((u32 *) b);
      
      	if (l < r)
      		return -1;
      	if (l > r)
      		return 1;
      	return 0;
      }
      
      static void init_array32(void *array)
      {
      	u32 *a = array;
      	int i;
      
      	a[0] = 3821;
      	for (i = 1; i < ARRAY_ELEMENTS; i++)
      		a[i] = next_pseudo_random32(a[i-1]);
      }
      
      static void init_array64(void *array)
      {
      	u64 *a = array;
      	int i;
      
      	a[0] = 3821;
      	for (i = 1; i < ARRAY_ELEMENTS; i++)
      		a[i] = next_pseudo_random32(a[i-1]);
      }
      
      static void init_array72(void *array)
      {
      	char *p;
      	u32 v;
      	int i;
      
      	v = 3821;
      	for (i = 0; i < ARRAY_ELEMENTS; i++) {
      		p = (char *)array + (i * 9);
      		put_unaligned(v, (u32*) p);
      		v = next_pseudo_random32(v);
      	}
      }
      
      static void sort_test(void (*init)(void *array),
      		      int (*cmp) (const void *, const void *),
      		      void *array, size_t size)
      {
      	ktime_t start, stop;
      	int i;
      
      	for (i = 0; i < 10000; i++) {
      		init(array);
      
      		local_irq_disable();
      		start = ktime_get();
      
      		sort(array, ARRAY_ELEMENTS, size, cmp, NULL);
      
      		stop = ktime_get();
      		local_irq_enable();
      
      		/* report timings for the last 100 of the 10000 runs */
      		if (i > 10000 - 101)
      			pr_info("%lld\n", ktime_to_us(ktime_sub(stop, start)));
      	}
      }
      
      static void *create_array(size_t size)
      {
      	void *array;
      
      	array = kmalloc(ARRAY_ELEMENTS * size, GFP_KERNEL);
      	if (!array)
      		return NULL;
      
      	return array;
      }
      
      static int perform_test(size_t size)
      {
      	void *array;
      
      	array = create_array(size);
      	if (!array)
      		return -ENOMEM;
      
      	pr_info("test element size %d bytes\n", (int)size);
      	switch (size) {
      	case 4:
      		sort_test(init_array32, cmp_32, array, size);
      		break;
      	case 8:
      		sort_test(init_array64, cmp_64, array, size);
      		break;
      	case 9:
      		sort_test(init_array72, cmp_72, array, size);
      		break;
      	}
      	kfree(array);
      
      	return 0;
      }
      
      static int __init sort_tests_init(void)
      {
      	int err;
      
      	err = perform_test(sizeof(u32));
      	if (err)
      		return err;
      
      	err = perform_test(sizeof(u64));
      	if (err)
      		return err;
      
      	err = perform_test(sizeof(u64)+1);
      	if (err)
      		return err;
      
      	return 0;
      }
      
      static void __exit sort_tests_exit(void)
      {
      }
      
      module_init(sort_tests_init);
      module_exit(sort_tests_exit);
      
      MODULE_LICENSE("GPL v2");
      MODULE_AUTHOR("Daniel Wagner");
      MODULE_DESCRIPTION("sort performance tests");
      Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hexdump: Make test data really const · 79e23d57
      Authored by Geert Uytterhoeven
      The test data arrays, containing pointers to test strings, are never
      modified, so they can be const, too.  Hence mark them "const" and
      "__initconst".
      
      This moves 28 pointers from ".init.data" to ".init.rodata".
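
      For illustration, the before/after shape (array name from
      lib/test-hexdump.c; the exact original annotation is hedged):

      	/* before: the array of pointers itself is writable (.init.data) */
      	static const char *test_data_1_le[] __initdata = { "be", "32", /* ... */ };

      	/* after: the pointer array is const too, so it lands in .init.rodata */
      	static const char * const test_data_1_le[] __initconst = { "be", "32", /* ... */ };
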
      Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • __bitmap_parselist: fix bug in empty string handling · 2528a8b8
      Authored by Chris Metcalf
      bitmap_parselist("", &mask, nmaskbits) will erroneously set bit zero in
      the mask.  The same bug is visible in cpumask_parselist() since it is
      layered on top of the bitmask code, e.g.  if you boot with "isolcpus=",
      you will actually end up with cpu zero isolated.
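
      A short snippet showing the symptom:

      	DECLARE_BITMAP(mask, NR_CPUS);

      	bitmap_zero(mask, NR_CPUS);
      	bitmap_parselist("", mask, NR_CPUS);
      	/* before this fix: bit 0 of mask is set, as if "0" had been parsed */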
      
      The bug was introduced in commit 4b060420 ("bitmap, irq: add
      smp_affinity_list interface to /proc/irq") when bitmap_parselist() was
      generalized to support userspace as well as kernelspace.
      
      Fixes: 4b060420 ("bitmap, irq: add smp_affinity_list interface to /proc/irq")
      Signed-off-by: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. June 19, 2015: 1 commit
  3. June 16, 2015: 1 commit
  4. June 13, 2015: 1 commit
  5. June 11, 2015: 2 commits
  6. June 7, 2015: 1 commit
  7. June 6, 2015: 1 commit
  8. June 3, 2015: 3 commits
  9. May 29, 2015: 1 commit
    • percpu_counter: batch size aware __percpu_counter_compare() · 80188b0d
      Authored by Dave Chinner
      XFS uses non-standard batch sizes to avoid frequent global counter
      updates on its allocated-inode counters, as they increment or
      decrement in batches of 64 inodes.  Hence the standard percpu
      counter batch of 32 means that the counter is effectively a global
      counter.  Currently XFS uses a batch size of 128 so that it doesn't
      take the global lock on every single modification.
      
      However, XFS also needs to compare accurately against zero, which
      means we need to use percpu_counter_compare().  That function has a
      hard-coded batch size of 32, and hence will spuriously fail to
      switch to precise comparisons when it should, so the accounting
      goes wrong.
      
      Add __percpu_counter_compare() to take a custom batch size so we can
      use it sanely in XFS and factor percpu_counter_compare() to use it.
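
      A sketch of the resulting API (shape hedged against
      include/linux/percpu_counter.h):

      	/* new: compare against rhs with a caller-chosen batch size */
      	int __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch);

      	/* existing entry point, refactored to pass the default batch */
      	static inline int percpu_counter_compare(struct percpu_counter *fbc, s64 rhs)
      	{
      		return __percpu_counter_compare(fbc, rhs, percpu_counter_batch);
      	}
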
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
  10. May 28, 2015: 5 commits
    • cpumask_set_cpu_local_first => cpumask_local_spread, lament · f36963c9
      Authored by Rusty Russell
      da91309e (cpumask: Utility function to set n'th cpu...) created a
      genuinely weird function.  I never saw it before; it went through DaveM.
      (He only does this to make us other maintainers feel better about our own
      mistakes.)
      
      cpumask_set_cpu_local_first's purpose is to say "I need to spread
      things across N online cpus, choosing the ones on this numa node
      first"; you call it in a loop.
      
      It can fail.  One of the two callers ignores this; the other aborts and
      fails the device open.
      
      It can fail in two ways: allocating the off-stack cpumask, or through a
      convoluted codepath which AFAICT can only occur if cpu_online_mask
      changes.  Which shouldn't happen, because if cpu_online_mask can change
      while you call this, it could return a now-offline cpu anyway.
      
      It contains a nonsensical test "!cpumask_of_node(numa_node)".  This was
      drawn to my attention by Geert, who said this causes a warning on Sparc.
      It sets a single bit in a cpumask instead of returning a cpu number,
      because that's what the callers want.
      
      It could be made more efficient by passing the previous cpu rather than
      an index, but that would be more invasive to the callers.
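
      A usage sketch of the replacement (the device/node lookup is
      illustrative):

      	/* i-th queue: get a CPU number, preferring this NUMA node's CPUs */
      	int cpu = cpumask_local_spread(i, dev_to_node(dev));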
      
      Fixes: da91309e
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (then rebased)
      Tested-by: Amir Vadai <amirv@mellanox.com>
      Acked-by: Amir Vadai <amirv@mellanox.com>
      Acked-by: David S. Miller <davem@davemloft.net>
    • rcu: Conditionally compile RCU's eqs warnings · 1ce46ee5
      Authored by Paul E. McKenney
      This commit applies some warning-omission micro-optimizations to RCU's
      various extended-quiescent-state functions, which are on the kernel/user
      hotpath for CONFIG_NO_HZ_FULL=y.
      Reported-by: Rik van Riel <riel@redhat.com>
      Reported-by: Mike Galbraith <umgwanakikbuti@gmail.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: Directly drive TASKS_RCU from Kconfig · 82d0f4c0
      Authored by Paul E. McKenney
      Currently, Kconfig will ask the user whether TASKS_RCU should be set.
      This is silly because Kconfig already has all the information that it
      needs to set this parameter.  This commit therefore directly drives
      the value of TASKS_RCU via "select" statements, which means that as
      subsystems come to require TASKS_RCU, those subsystems will need to
      add "select" statements of their own.
      Reported-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
    • rcu: Provide diagnostic option to slow down grace-period scans · 0f41c0dd
      Authored by Paul E. McKenney
      Grace-period scans of the rcu_node combining tree normally
      proceed quite quickly, so that it is very difficult to reproduce
      races against them.  This commit therefore allows grace-period
      pre-initialization and cleanup to be artificially slowed down,
      increasing race-reproduction probability.  Two pairs of new Kconfig
      parameters are provided: RCU_TORTURE_TEST_SLOW_PREINIT to enable the
      slowing down of propagating CPU-hotplug changes up the combining
      tree, along with RCU_TORTURE_TEST_SLOW_PREINIT_DELAY to specify the
      delay in jiffies; and RCU_TORTURE_TEST_SLOW_CLEANUP to enable the
      slowing down of the end-of-grace-period cleanup scan, along with
      RCU_TORTURE_TEST_SLOW_CLEANUP_DELAY to specify the delay in jiffies.
      Boot-time parameters named rcutree.gp_preinit_delay and
      rcutree.gp_cleanup_delay allow these delays to be specified at boot time.
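
      For example, on the kernel command line (delay values illustrative):

      	rcutree.gp_preinit_delay=3 rcutree.gp_cleanup_delay=3
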
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • test_bpf: add similarly conflicting jump test case only for classic · bde28bc6
      Authored by Daniel Borkmann
      While 3b529602 ("test_bpf: add more eBPF jump torture cases")
      added the int3 bug test case only for eBPF, which needs exactly 11
      passes to converge, here's a version for classic BPF with 11 passes,
      and one that would need 70 passes on x86_64 to actually converge and
      be successfully JITed.  Effectively, all jumps are optimized out,
      resulting in a JIT image of just 89 bytes (from originally max BPF
      insns), only returning K.
      
      Might be useful as a recipe for folks wanting to craft a test case
      when backporting the fix in commit 3f7352bf ("x86: bpf_jit: fix
      compilation of large bpf programs") while not having eBPF.  The 2nd
      one is delegated to the interpreter, as the last pass still results
      in shrinking; in other words, this one won't be JITed on x86_64.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  11. May 27, 2015: 1 commit
  12. May 26, 2015: 1 commit
  13. May 25, 2015: 1 commit
  14. May 23, 2015: 1 commit
  15. May 19, 2015: 3 commits
    • x86/fpu: Rename i387.h to fpu/api.h · df6b35f4
      Authored by Ingo Molnar
      We already have fpu/types.h, move i387.h to fpu/api.h.
      
      The file name has become a misnomer anyway: it offers generic FPU APIs,
      but is not limited to i387 functionality.
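
      Callers update accordingly:

      	#include <asm/fpu/api.h>	/* was: #include <asm/i387.h> */
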
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • mm/uaccess, mm/fault: Clarify that uaccess may only sleep if pagefaults are enabled · b3c395ef
      Authored by David Hildenbrand
      In general, non-atomic variants of user access functions must not sleep
      if pagefaults are disabled.
      
      Let's update all relevant comments in uaccess code. This also reflects
      the might_sleep() checks in might_fault().
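
      A sketch of the rule from the caller's side (standard uaccess
      helpers; variables illustrative):

      	pagefault_disable();
      	/* non-atomic uaccess must not be used here: it may sleep on a fault */
      	ret = __copy_from_user_inatomic(dst, usrc, len);	/* fails fast instead */
      	pagefault_enable();

      	/* with pagefaults enabled, the sleeping variant is fine */
      	if (copy_from_user(dst, usrc, len))
      		return -EFAULT;
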
      Reviewed-and-tested-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: David.Laight@ACULAB.COM
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: airlied@linux.ie
      Cc: akpm@linux-foundation.org
      Cc: benh@kernel.crashing.org
      Cc: bigeasy@linutronix.de
      Cc: borntraeger@de.ibm.com
      Cc: daniel.vetter@intel.com
      Cc: heiko.carstens@de.ibm.com
      Cc: herbert@gondor.apana.org.au
      Cc: hocko@suse.cz
      Cc: hughd@google.com
      Cc: mst@redhat.com
      Cc: paulus@samba.org
      Cc: ralf@linux-mips.org
      Cc: schwidefsky@de.ibm.com
      Cc: yang.shi@windriver.com
      Link: http://lkml.kernel.org/r/1431359540-32227-4-git-send-email-dahi@linux.vnet.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched/preempt: Merge preempt_mask.h into preempt.h · 92cf2118
      Authored by Frederic Weisbecker
      preempt_mask.h defines all the preempt_count semantics and related
      symbols: preempt, softirq, hardirq, nmi, preempt active, need resched,
      etc...
      
      preempt.h defines the accessors and mutators of preempt_count.
      
      But there is a messy dependency game around those two header files:
      
      	* preempt_mask.h includes preempt.h in order to access preempt_count()
      
      	* preempt_mask.h defines all preempt_count semantic and symbols
      	  except PREEMPT_NEED_RESCHED that is needed by asm/preempt.h
      	  Thus we need to define it from preempt.h, right before including
      	  asm/preempt.h, instead of defining it to preempt_mask.h with the
      	  other preempt_count symbols. Therefore the preempt_count semantics
      	  happen to be spread out.
      
      	* We plan to introduce preempt_active_[enter,exit]() to consolidate
      	  preempt_schedule*() code. But we'll need to access both preempt_count
      	  mutators (preempt_count_add()) and preempt_count symbols
      	  (PREEMPT_ACTIVE, PREEMPT_OFFSET). The usual place to define preempt
      	  operations is in preempt.h but then we'll need symbols in
      	  preempt_mask.h, which already includes preempt.h.  So we end up
      	  with a circular dependency.
      
      Let's merge preempt_mask.h into preempt.h to solve these dependency issues.
      This way we gather semantic symbols and operation definition of
      preempt_count in a single file.
      
      This is a dumb copy-paste merge.  Further merge re-arrangements are
      performed in a subsequent patch to ease review.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1431441711-29753-2-git-send-email-fweisbec@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  16. May 17, 2015: 1 commit
    • rhashtable: Add cap on number of elements in hash table · 07ee0722
      Authored by Herbert Xu
      We currently have no limit on the number of elements in a hash table.
      This is a problem because some users (tipc) set a ceiling on the
      maximum table size, and when that is reached the hash table may
      degenerate.  Others may encounter OOM when growing, and if we allow
      insertions when that happens the hash table performance may also
      suffer.
      
      This patch adds a new parameter, insecure_max_entries, which becomes
      the cap on the table.  If unset it defaults to max_size * 2.  If
      max_size is also zero, there is no cap on the number of elements in
      the table.  However, the table will grow whenever the utilisation
      hits 100%, and if that growth fails, you will get ENOMEM on
      insertion.
      
      As allowing oversubscription is potentially dangerous, the name
      contains the word insecure.
      
      Note that the cap is not a hard limit.  This is done for performance
      reasons as enforcing a hard limit will result in use of atomic ops
      that are heavier than the ones we currently use.
      
      The reasoning is that we're only guarding against a gross
      oversubscription of the table, rather than a small breach of the limit.
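
      A hedged sketch of setting the cap (object layout hypothetical;
      field names per this patch):

      	struct my_obj {
      		u32 key;
      		struct rhash_head node;
      	};

      	static const struct rhashtable_params my_params = {
      		.head_offset          = offsetof(struct my_obj, node),
      		.key_offset           = offsetof(struct my_obj, key),
      		.key_len              = sizeof(u32),
      		.max_size             = 1024,
      		.insecure_max_entries = 4096,	/* cap; 0 means max_size * 2 */
      	};
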
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  17. May 15, 2015: 2 commits
    • test_bpf: fix sparse warnings · 56cbaa45
      Authored by Michael Holzheu
      Fix several sparse warnings like:
      lib/test_bpf.c:1824:25: sparse: constant 4294967295 is so big it is long
      lib/test_bpf.c:1878:25: sparse: constant 0x0000ffffffff0000 is so big it is long
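
      The fix is to suffix such constants explicitly, e.g. (context
      illustrative):

      	/* before: sparse warns that the constant is implicitly long */
      	u64 mask = 0x0000ffffffff0000;

      	/* after: the explicit suffix makes the type unambiguous */
      	u64 mask = 0x0000ffffffff0000ULL;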
      
      Fixes: cffc642d ("test_bpf: add 173 new testcases for eBPF")
      Reported-by: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • test_bpf: add tests related to BPF_MAXINSNS · a4afd37b
      Authored by Daniel Borkmann
      A couple of torture test cases related to the bug fixed in 0b59d880
      ("ARM: net: delegate filter to kernel interpreter when imm_offset()
      return value can't fit into 12bits.").

      I've added a helper to allocate and fill the insn space; a sketch of
      such a helper follows the output below.  Output on x86_64 from my laptop:
      
      test_bpf: #233 BPF_MAXINSNS: Maximum possible literals jited:0 7 PASS
      test_bpf: #234 BPF_MAXINSNS: Single literal jited:0 8 PASS
      test_bpf: #235 BPF_MAXINSNS: Run/add until end jited:0 11553 PASS
      test_bpf: #236 BPF_MAXINSNS: Too many instructions PASS
      test_bpf: #237 BPF_MAXINSNS: Very long jump jited:0 9 PASS
      test_bpf: #238 BPF_MAXINSNS: Ctx heavy transformations jited:0 20329 20398 PASS
      test_bpf: #239 BPF_MAXINSNS: Call heavy transformations jited:0 32178 32475 PASS
      test_bpf: #240 BPF_MAXINSNS: Jump heavy test jited:0 10518 PASS
      
      test_bpf: #233 BPF_MAXINSNS: Maximum possible literals jited:1 4 PASS
      test_bpf: #234 BPF_MAXINSNS: Single literal jited:1 4 PASS
      test_bpf: #235 BPF_MAXINSNS: Run/add until end jited:1 1625 PASS
      test_bpf: #236 BPF_MAXINSNS: Too many instructions PASS
      test_bpf: #237 BPF_MAXINSNS: Very long jump jited:1 8 PASS
      test_bpf: #238 BPF_MAXINSNS: Ctx heavy transformations jited:1 3301 3174 PASS
      test_bpf: #239 BPF_MAXINSNS: Call heavy transformations jited:1 24107 23491 PASS
      test_bpf: #240 BPF_MAXINSNS: Jump heavy test jited:1 8651 PASS
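
      A hedged sketch of such a fill helper (names, the literal pattern,
      and the ptr/len fields are illustrative of the convention this
      patch adds):

      	/* compound-literal wrapper around the classic BPF_STMT initializer */
      	#define __BPF_STMT(CODE, K) ((struct sock_filter) BPF_STMT(CODE, K))

      	static int bpf_fill_maxinsns_literals(struct bpf_test *self)
      	{
      		unsigned int len = BPF_MAXINSNS;
      		struct sock_filter *insn;
      		__u32 k = ~0;
      		int i;

      		insn = kmalloc_array(len, sizeof(*insn), GFP_KERNEL);
      		if (!insn)
      			return -ENOMEM;

      		/* a distinct literal per insn; the program returns the first */
      		for (i = 0; i < len; i++, k--)
      			insn[i] = __BPF_STMT(BPF_RET | BPF_K, k);

      		self->u.ptr.insns = insn;
      		self->u.ptr.len = len;

      		return 0;
      	}
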
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Nicolas Schichan <nschichan@freebox.fr>
      Acked-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  18. May 13, 2015: 3 commits
  19. May 11, 2015: 2 commits
  20. May 6, 2015: 4 commits