1. 05 Feb 2018, 1 commit
    • Y
      bpf: fix selftests/bpf test_kmod.sh failure when CONFIG_BPF_JIT_ALWAYS_ON=y · 09584b40
      Authored by Yonghong Song
      With CONFIG_BPF_JIT_ALWAYS_ON defined in the config file,
      tools/testing/selftests/bpf/test_kmod.sh failed as shown below:
        [root@localhost bpf]# ./test_kmod.sh
        sysctl: setting key "net.core.bpf_jit_enable": Invalid argument
        [ JIT enabled:0 hardened:0 ]
        [  132.175681] test_bpf: #297 BPF_MAXINSNS: Jump, gap, jump, ... FAIL to prog_create err=-524 len=4096
        [  132.458834] test_bpf: Summary: 348 PASSED, 1 FAILED, [340/340 JIT'ed]
        [ JIT enabled:1 hardened:0 ]
        [  133.456025] test_bpf: #297 BPF_MAXINSNS: Jump, gap, jump, ... FAIL to prog_create err=-524 len=4096
        [  133.730935] test_bpf: Summary: 348 PASSED, 1 FAILED, [340/340 JIT'ed]
        [ JIT enabled:1 hardened:1 ]
        [  134.769730] test_bpf: #297 BPF_MAXINSNS: Jump, gap, jump, ... FAIL to prog_create err=-524 len=4096
        [  135.050864] test_bpf: Summary: 348 PASSED, 1 FAILED, [340/340 JIT'ed]
        [ JIT enabled:1 hardened:2 ]
        [  136.442882] test_bpf: #297 BPF_MAXINSNS: Jump, gap, jump, ... FAIL to prog_create err=-524 len=4096
        [  136.821810] test_bpf: Summary: 348 PASSED, 1 FAILED, [340/340 JIT'ed]
        [root@localhost bpf]#
      
      test_kmod.sh loads and removes test_bpf.ko multiple times with different
      settings for the sysctls net.core.bpf_jit_{enable,harden}. The failing test #297
      of test_bpf.ko is deliberately designed such that the JIT always fails on it.
      
      Commit 290af866 ("bpf: introduce BPF_JIT_ALWAYS_ON config")
      introduced the following tightening logic:
          ...
              if (!bpf_prog_is_dev_bound(fp->aux)) {
                      fp = bpf_int_jit_compile(fp);
          #ifdef CONFIG_BPF_JIT_ALWAYS_ON
                      if (!fp->jited) {
                              *err = -ENOTSUPP;
                              return fp;
                      }
          #endif
          ...
      With this logic, test #297 always gets a return value of -ENOTSUPP
      when CONFIG_BPF_JIT_ALWAYS_ON is defined, causing the test failure.
      
      This patch fixes the failure by marking test #297 as an expected failure
      when CONFIG_BPF_JIT_ALWAYS_ON is defined.
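      
      A minimal sketch of the idea, assuming a hypothetical expected_errcode
      field and helper (the exact upstream change may be shaped differently):
      a test declares the error it expects from prog creation, so a deliberate
      JIT failure under CONFIG_BPF_JIT_ALWAYS_ON counts as PASS rather than FAIL.
      
        #include <stdbool.h>
        
        /* Hypothetical per-test expectation: 0 means creation must succeed,
         * a negative errno (e.g. -ENOTSUPP for test #297 under
         * CONFIG_BPF_JIT_ALWAYS_ON) means exactly that error is expected. */
        struct bpf_test_expect {
                const char *descr;
                int expected_errcode;
        };
        
        /* Returns true when the prog_create() result matches the expectation. */
        static bool prog_create_result_ok(const struct bpf_test_expect *t, int err)
        {
                if (t->expected_errcode)
                        return err == t->expected_errcode;      /* expected failure */
                return err == 0;                                /* normal case      */
        }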
      
      Fixes: 290af866 ("bpf: introduce BPF_JIT_ALWAYS_ON config")
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      09584b40
  2. 27 Jan 2018, 1 commit
  3. 20 Jan 2018, 1 commit
  4. 10 Jan 2018, 1 commit
    • A
      bpf: introduce BPF_JIT_ALWAYS_ON config · 290af866
      Authored by Alexei Starovoitov
      The BPF interpreter has been used as part of the Spectre variant 2 attack (CVE-2017-5715).
      
      A quote from the Google Project Zero blog:
      "At this point, it would normally be necessary to locate gadgets in
      the host kernel code that can be used to actually leak data by reading
      from an attacker-controlled location, shifting and masking the result
      appropriately and then using the result of that as offset to an
      attacker-controlled address for a load. But piecing gadgets together
      and figuring out which ones work in a speculation context seems annoying.
      So instead, we decided to use the eBPF interpreter, which is built into
      the host kernel - while there is no legitimate way to invoke it from inside
      a VM, the presence of the code in the host kernel's text section is sufficient
      to make it usable for the attack, just like with ordinary ROP gadgets."
      
      To make the attacker's job harder, introduce a BPF_JIT_ALWAYS_ON config
      option that removes the interpreter from the kernel in favor of a JIT-only mode.
      So far the eBPF JIT is supported by:
      x64, arm64, arm32, sparc64, s390, powerpc64, mips64
      
      The start of the JITed program is randomized and its code page is marked read-only.
      In addition, "constant blinding" can be turned on with the net.core.bpf_jit_harden sysctl.
      
      v2->v3:
      - move __bpf_prog_ret0 under ifdef (Daniel)
      
      v1->v2:
      - fix init order, test_bpf and cBPF (Daniel's feedback)
      - fix offloaded bpf (Jakub's feedback)
      - add 'return 0' dummy in case something can invoke prog->bpf_func
      - retarget bpf tree. For bpf-next the patch would need one extra hunk.
        It will be sent when the trees are merged back to net-next
      
      Considered doing:
        int bpf_jit_enable __read_mostly = BPF_EBPF_JIT_DEFAULT;
      but it seems better to land the patch as-is and, in bpf-next, remove the
      bpf_jit_enable global variable from all JITs, consolidate it in one place
      and remove this jit_init() function.
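      
      A minimal sketch of the 'return 0' dummy mentioned in the v2->v3/v1->v2
      notes above (signature approximated from the interpreter's calling
      convention; the exact upstream hunk may differ): with the interpreter
      compiled out, prog->bpf_func must still point at something harmless
      until the JIT installs the real image.
      
        #include <linux/filter.h>
        
        #ifdef CONFIG_BPF_JIT_ALWAYS_ON
        /* Placeholder for a not-yet-JITed program: do nothing, return 0. */
        static unsigned int __bpf_prog_ret0(const void *ctx,
                                            const struct bpf_insn *insn)
        {
                return 0;
        }
        #endif
      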
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      290af866
  5. 16 Dec 2017, 1 commit
  6. 10 Aug 2017, 1 commit
    • D
      bpf: add BPF_J{LT,LE,SLT,SLE} instructions · 92b31a9a
      Authored by Daniel Borkmann
      Currently, eBPF only understands the BPF_JGT (>), BPF_JGE (>=),
      BPF_JSGT (s>) and BPF_JSGE (s>=) jump instructions. This means that,
      in particular, the *JLT/*JLE counterparts involving immediates need
      to be rewritten by swapping arguments, e.g. from X < [IMM] into
      [IMM] > X, which requires the immediate to first be loaded into a
      register Y := [IMM] so that we can then compare Y > X. Note that the
      destination operand is always required to be a register.
      
      This has the downside of unnecessarily increased register pressure,
      meaning complex programs may need to temporarily spill other registers
      to the stack in order to obtain an unused register for the [IMM].
      Loading into registers also affects state pruning, since we need to
      account for that register use and potentially for those registers that
      had to be spilled/filled again. As a consequence, slightly more stack
      space may be used due to spilling, and BPF programs end up a bit longer
      due to the extra code for the register load and the potentially
      required spill/fills.
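      
      As an illustration, here is a hedged sketch using the insn macros from
      include/linux/filter.h (register choices, array names and jump offsets
      are made up for the example), testing "R1 < 7" before and after this
      change:
      
        #include <linux/filter.h>
        
        /* Before: no BPF_JLT, so "R1 < 7" has to be expressed as "7 > R1",
         * burning a scratch register (R2) for the immediate. */
        struct bpf_insn test_lt_without_jlt[] = {
                BPF_MOV64_IMM(BPF_REG_0, 0),                    /* R0 := 0                 */
                BPF_MOV64_IMM(BPF_REG_2, 7),                    /* R2 := 7 (imm in a reg)  */
                BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 1),  /* if 7 > R1 skip the exit */
                BPF_EXIT_INSN(),                                /* R1 >= 7: return 0       */
                BPF_MOV64_IMM(BPF_REG_0, 1),                    /* R1 <  7: return 1       */
                BPF_EXIT_INSN(),
        };
        
        /* After: compare against the immediate directly, no extra register. */
        struct bpf_insn test_lt_with_jlt[] = {
                BPF_MOV64_IMM(BPF_REG_0, 0),
                BPF_JMP_IMM(BPF_JLT, BPF_REG_1, 7, 1),          /* if R1 < 7 skip the exit */
                BPF_EXIT_INSN(),
                BPF_MOV64_IMM(BPF_REG_0, 1),
                BPF_EXIT_INSN(),
        };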
      
      Thus, add BPF_JLT (<), BPF_JLE (<=), BPF_JSLT (s<), BPF_JSLE (s<=)
      counterparts to the eBPF instruction set. Modifying LLVM to
      remove the NegateCC() workaround in a PoC patch at [1] and
      allowing it to also emit the new instructions resulted in
      cilium's BPF programs that are injected into the fast-path
      having a reduced program length in the range of 2-3% (e.g.
      accumulated main and tail call sections from one of the object
      files reduced from 4864 to 4729 insns), reduced complexity in
      the range of 10-30% (e.g. accumulated sections reduced in one
      of the cases from 116432 to 88428 insns), and reduced stack
      usage in the range of 1-5% (e.g. accumulated sections from one
      of the object files reduced from 824 to 784 bytes).
      
      The modification for LLVM will be incorporated in a backwards-compatible
      way. The plan is for LLVM to i) offer a target-specific option that lets
      the user explicitly enable the extension (as we have today with -m
      target-specific extensions for various CPU insns), and ii) check the
      kernel for presence of the extensions and enable them transparently
      when the user selects more aggressive options such as -march=native
      in a bpf target context. (Other frontends generating BPF byte code,
      e.g. ply, can probe the kernel directly for their code generation.)
      
        [1] https://github.com/borkmann/llvm/tree/bpf-insns
      
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      92b31a9a
  7. 21 Jun 2017, 1 commit
    • Y
      net: introduce __skb_put_[zero, data, u8] · de77b966
      Authored by yuan linyu
      Following Johannes Berg, the semantic patch file is as below:
      @@
      identifier p, p2;
      expression len;
      expression skb;
      type t, t2;
      @@
      (
      -p = __skb_put(skb, len);
      +p = __skb_put_zero(skb, len);
      |
      -p = (t)__skb_put(skb, len);
      +p = __skb_put_zero(skb, len);
      )
      ... when != p
      (
      p2 = (t2)p;
      -memset(p2, 0, len);
      |
      -memset(p, 0, len);
      )
      
      @@
      identifier p;
      expression len;
      expression skb;
      type t;
      @@
      (
      -t p = __skb_put(skb, len);
      +t p = __skb_put_zero(skb, len);
      )
      ... when != p
      (
      -memset(p, 0, len);
      )
      
      @@
      type t, t2;
      identifier p, p2;
      expression skb;
      @@
      t *p;
      ...
      (
      -p = __skb_put(skb, sizeof(t));
      +p = __skb_put_zero(skb, sizeof(t));
      |
      -p = (t *)__skb_put(skb, sizeof(t));
      +p = __skb_put_zero(skb, sizeof(t));
      )
      ... when != p
      (
      p2 = (t2)p;
      -memset(p2, 0, sizeof(*p));
      |
      -memset(p, 0, sizeof(*p));
      )
      
      @@
      expression skb, len;
      @@
      -memset(__skb_put(skb, len), 0, len);
      +__skb_put_zero(skb, len);
      
      @@
      expression skb, len, data;
      @@
      -memcpy(__skb_put(skb, len), data, len);
      +__skb_put_data(skb, data, len);
      
      @@
      expression SKB, C, S;
      typedef u8;
      identifier fn = {__skb_put};
      fresh identifier fn2 = fn ## "_u8";
      @@
      - *(u8 *)fn(SKB, S) = C;
      + fn2(SKB, C);
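      
      In plain C, the transformations above boil down to the following
      before/after shape in driver code (a hedged sketch with made-up names,
      assuming the helpers introduced by this change in linux/skbuff.h):
      
        #include <linux/skbuff.h>
        
        /* Hypothetical helper, not taken from any real driver, just to show
         * the before/after shape of the conversion. */
        static void fill_example(struct sk_buff *skb, const void *data,
                                 unsigned int len, u8 flags)
        {
                /* Before this change a typical sequence looked like:
                 *
                 *   p = __skb_put(skb, len);
                 *   memset(p, 0, len);
                 *   p = __skb_put(skb, len);
                 *   memcpy(p, data, len);
                 *   *(u8 *)__skb_put(skb, 1) = flags;
                 */
        
                /* After, each pattern collapses into a single helper call: */
                __skb_put_zero(skb, len);        /* reserve len bytes, zeroed     */
                __skb_put_data(skb, data, len);  /* reserve len bytes, copy data  */
                __skb_put_u8(skb, flags);        /* reserve one byte, set to flags */
        }
      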
      Signed-off-by: yuan linyu <Linyu.Yuan@alcatel-sbell.com.cn>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      de77b966
  8. 15 Jun 2017, 1 commit
  9. 01 Jun 2017, 1 commit
  10. 26 May 2017, 1 commit
  11. 03 May 2017, 3 commits
    • G
      test_bpf: Use ULL suffix for 64-bit constants · 86f8e247
      Authored by Geert Uytterhoeven
      On 32-bit:
      
          lib/test_bpf.c:4772: warning: integer constant is too large for ‘unsigned long’ type
          lib/test_bpf.c:4772: warning: integer constant is too large for ‘unsigned long’ type
          lib/test_bpf.c:4773: warning: integer constant is too large for ‘unsigned long’ type
          lib/test_bpf.c:4773: warning: integer constant is too large for ‘unsigned long’ type
          lib/test_bpf.c:4787: warning: integer constant is too large for ‘unsigned long’ type
          lib/test_bpf.c:4787: warning: integer constant is too large for ‘unsigned long’ type
          lib/test_bpf.c:4801: warning: integer constant is too large for ‘unsigned long’ type
          lib/test_bpf.c:4801: warning: integer constant is too large for ‘unsigned long’ type
          lib/test_bpf.c:4802: warning: integer constant is too large for ‘unsigned long’ type
          lib/test_bpf.c:4802: warning: integer constant is too large for ‘unsigned long’ type
      
      On 32-bit systems, "long" is only 32-bit.
      Replace the "UL" suffix by "ULL" to fix this.
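      
      A small self-contained illustration of the problem and the fix (the
      constant is borrowed from the sparse report further down this log;
      variable names are made up):
      
        #include <stdio.h>
        
        int main(void)
        {
                /* On 32-bit, unsigned long is 32 bits wide, so a 64-bit
                 * constant with a UL suffix triggers the warning above:
                 *
                 *   unsigned long bad = 0x0000ffffffff0000UL;
                 */
        
                /* unsigned long long is at least 64 bits everywhere: */
                unsigned long long good = 0x0000ffffffff0000ULL;
        
                printf("%llx\n", good);
                return 0;
        }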
      
      Fixes: 85f68fe8 ("bpf, arm64: implement jiting of BPF_XADD")
      Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      86f8e247
    • D
      bpf, arm64: fix jit branch offset related to ldimm64 · ddc665a4
      Authored by Daniel Borkmann
      When the instruction right before the branch destination is
      a 64-bit load immediate, we currently calculate the wrong
      jump offset in the ctx->offset[] array, as we only account for
      one instruction slot for the 64-bit load immediate although
      it uses two BPF instructions. Fix it up by setting the offset
      into the right slot after we have incremented the index.
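      
      Roughly, the bookkeeping in the JIT's instruction loop ends up shaped
      like the abridged sketch below (build_insn(), ctx->offset[] and
      ctx->idx are arm64 JIT internals; this is not the literal upstream
      diff, just an illustration of the rule described above):
      
        ret = build_insn(insn, ctx);
        if (ret > 0) {
                /* A 64-bit load immediate consumed two BPF insn slots:
                 * advance past the second (pseudo) slot and record its
                 * offset as well, so later branch targets line up. */
                i++;
                if (ctx->image == NULL)
                        ctx->offset[i] = ctx->idx;
                continue;
        }
        if (ctx->image == NULL)
                ctx->offset[i] = ctx->idx;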
      
      Before (ldimm64 test 1):
      
        [...]
        00000020:  52800007  mov w7, #0x0 // #0
        00000024:  d2800060  mov x0, #0x3 // #3
        00000028:  d2800041  mov x1, #0x2 // #2
        0000002c:  eb01001f  cmp x0, x1
        00000030:  54ffff82  b.cs 0x00000020
        00000034:  d29fffe7  mov x7, #0xffff // #65535
        00000038:  f2bfffe7  movk x7, #0xffff, lsl #16
        0000003c:  f2dfffe7  movk x7, #0xffff, lsl #32
        00000040:  f2ffffe7  movk x7, #0xffff, lsl #48
        00000044:  d29dddc7  mov x7, #0xeeee // #61166
        00000048:  f2bdddc7  movk x7, #0xeeee, lsl #16
        0000004c:  f2ddddc7  movk x7, #0xeeee, lsl #32
        00000050:  f2fdddc7  movk x7, #0xeeee, lsl #48
        [...]
      
      After (ldimm64 test 1):
      
        [...]
        00000020:  52800007  mov w7, #0x0 // #0
        00000024:  d2800060  mov x0, #0x3 // #3
        00000028:  d2800041  mov x1, #0x2 // #2
        0000002c:  eb01001f  cmp x0, x1
        00000030:  540000a2  b.cs 0x00000044
        00000034:  d29fffe7  mov x7, #0xffff // #65535
        00000038:  f2bfffe7  movk x7, #0xffff, lsl #16
        0000003c:  f2dfffe7  movk x7, #0xffff, lsl #32
        00000040:  f2ffffe7  movk x7, #0xffff, lsl #48
        00000044:  d29dddc7  mov x7, #0xeeee // #61166
        00000048:  f2bdddc7  movk x7, #0xeeee, lsl #16
        0000004c:  f2ddddc7  movk x7, #0xeeee, lsl #32
        00000050:  f2fdddc7  movk x7, #0xeeee, lsl #48
        [...]
      
      Also, add a couple of test cases to make sure JITs pass
      this test. Tested on Cavium ThunderX ARMv8. The added
      test cases all pass after the fix.
      
      Fixes: 8eee539d ("arm64: bpf: fix out-of-bounds read in bpf2a64_offset()")
      Reported-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Cc: Xi Wang <xi.wang@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ddc665a4
    • D
      bpf, arm64: implement jiting of BPF_XADD · 85f68fe8
      Authored by Daniel Borkmann
      This work adds BPF_XADD for BPF_W/BPF_DW to the arm64 JIT and therefore
      completes JITing of all BPF instructions, meaning we can also remove
      the 'notyet' label and no longer need to fall back to the interpreter
      when BPF_XADD is used in a program!
      
      This now also brings arm64 JIT in line with x86_64, s390x, ppc64, sparc64,
      where all current eBPF features are supported.
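      
      For reference, the semantics being JITed here, written as plain C (a
      hedged illustration only: dst already points at the memory operand,
      src is the source register value, and the real instruction performs
      the update atomically, which the ldxr/stxr retry loop below provides):
      
        #include <stdint.h>
        
        /* BPF_STX | BPF_XADD | BPF_W :  *(u32 *)(dst + off) += (u32)src */
        static void xadd_w(uint32_t *dst, uint32_t src)  { *dst += src; }
        
        /* BPF_STX | BPF_XADD | BPF_DW:  *(u64 *)(dst + off) += src */
        static void xadd_dw(uint64_t *dst, uint64_t src) { *dst += src; }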
      
      BPF_W example from test_bpf:
      
        .u.insns_int = {
          BPF_ALU32_IMM(BPF_MOV, R0, 0x12),
          BPF_ST_MEM(BPF_W, R10, -40, 0x10),
          BPF_STX_XADD(BPF_W, R10, R0, -40),
          BPF_LDX_MEM(BPF_W, R0, R10, -40),
          BPF_EXIT_INSN(),
        },
      
        [...]
        00000020:  52800247  mov w7, #0x12 // #18
        00000024:  928004eb  mov x11, #0xffffffffffffffd8 // #-40
        00000028:  d280020a  mov x10, #0x10 // #16
        0000002c:  b82b6b2a  str w10, [x25,x11]
        // start of xadd mapping:
        00000030:  928004ea  mov x10, #0xffffffffffffffd8 // #-40
        00000034:  8b19014a  add x10, x10, x25
        00000038:  f9800151  prfm pstl1strm, [x10]
        0000003c:  885f7d4b  ldxr w11, [x10]
        00000040:  0b07016b  add w11, w11, w7
        00000044:  880b7d4b  stxr w11, w11, [x10]
        00000048:  35ffffab  cbnz w11, 0x0000003c
        // end of xadd mapping:
        [...]
      
      BPF_DW example from test_bpf:
      
        .u.insns_int = {
          BPF_ALU32_IMM(BPF_MOV, R0, 0x12),
          BPF_ST_MEM(BPF_DW, R10, -40, 0x10),
          BPF_STX_XADD(BPF_DW, R10, R0, -40),
          BPF_LDX_MEM(BPF_DW, R0, R10, -40),
          BPF_EXIT_INSN(),
        },
      
        [...]
        00000020:  52800247  mov w7,  #0x12 // #18
        00000024:  928004eb  mov x11, #0xffffffffffffffd8 // #-40
        00000028:  d280020a  mov x10, #0x10 // #16
        0000002c:  f82b6b2a  str x10, [x25,x11]
        // start of xadd mapping:
        00000030:  928004ea  mov x10, #0xffffffffffffffd8 // #-40
        00000034:  8b19014a  add x10, x10, x25
        00000038:  f9800151  prfm pstl1strm, [x10]
        0000003c:  c85f7d4b  ldxr x11, [x10]
        00000040:  8b07016b  add x11, x11, x7
        00000044:  c80b7d4b  stxr w11, x11, [x10]
        00000048:  35ffffab  cbnz w11, 0x0000003c
        // end of xadd mapping:
        [...]
      
      Tested on Cavium ThunderX ARMv8, test suite results after the patch:
      
        No JIT:   [ 3751.855362] test_bpf: Summary: 311 PASSED, 0 FAILED, [0/303 JIT'ed]
        With JIT: [ 3573.759527] test_bpf: Summary: 311 PASSED, 0 FAILED, [303/303 JIT'ed]
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      85f68fe8
  12. 21 Oct 2016, 1 commit
    • D
      bpf, test: fix ld_abs + vlan push/pop stress test · 0d906b1e
      Authored by Daniel Borkmann
      After commit 636c2628 ("net: skbuff: Remove errornous length
      validation in skb_vlan_pop()") the mentioned test case stopped working,
      throwing a -12 (ENOMEM) return code. The issue, however, is not due to
      636c2628, but rather due to a buggy test case that got uncovered
      by the change in behaviour in 636c2628.
      
      The data_size of that test case for the skb was set to 1. In the
      bpf_fill_ld_abs_vlan_push_pop() handler, bpf insns are generated that
      loop with: reading skb data, pushing 68 tags, reading skb data,
      popping 68 tags, reading skb data, etc, in order to force an skb
      expansion and thus trigger that JITs re-cache skb->data. The problem
      is that the initial data_size is too small.
      
      Before 636c2628, the test silently bailed out due to the
      skb->len < VLAN_ETH_HLEN check, returning 0; now it throws an error
      from the failing skb_ensure_writable(). Set at least a minimum of
      ETH_HLEN as the initial length so that on the first push of data the
      equivalent pop will succeed.
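      
      A hedged illustration of the sizing constraint (constants as defined
      in the uapi if_ether/if_vlan headers; the field names here are only a
      paraphrase of the test setup):
      
        #define ETH_HLEN        14      /* bare Ethernet header             */
        #define VLAN_ETH_HLEN   18      /* Ethernet header + one 802.1Q tag */
        
        /* Before: the test skb carried a single byte of data, so with the
         * stricter behaviour the pop path now fails in skb_ensure_writable().
         * After: start from at least ETH_HLEN so the first push has a
         * matching pop that can succeed. */
        static const unsigned int old_data_size = 1;
        static const unsigned int new_data_size = ETH_HLEN;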
      
      Fixes: 4d9c5c53 ("test_bpf: add bpf_skb_vlan_push/pop() tests")
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0d906b1e
  13. 16 Sep 2016, 1 commit
  14. 17 May 2016, 1 commit
  15. 07 Apr 2016, 4 commits
  16. 19 Dec 2015, 1 commit
    • D
      bpf, test: add couple of test cases · 9dd2af83
      Authored by Daniel Borkmann
      Add a couple of test cases for the interpreter but also JITs, e.g. to
      test that when imm32 moves are being done, the upper 32 bits of the
      registers are zero extended.
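      
      A hedged sketch of the property such a test exercises (insn macros
      from include/linux/filter.h; the actual added test cases may be
      structured differently):
      
        #include <linux/filter.h>
        
        /* A 32-bit move must clear the upper half of the 64-bit register,
         * both in the interpreter and in every JIT. */
        struct bpf_insn mov32_zext_sketch[] = {
                BPF_LD_IMM64(BPF_REG_1, 0xffffffffffffffffULL),
                BPF_ALU32_IMM(BPF_MOV, BPF_REG_1, 2),   /* R1 must become 2, not 0xffffffff00000002 */
                BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),    /* return R1                                */
                BPF_EXIT_INSN(),
        };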
      
      Without JIT:
      
        [...]
        [ 1114.129301] test_bpf: #43 MOV REG64 jited:0 128 PASS
        [ 1114.130626] test_bpf: #44 MOV REG32 jited:0 139 PASS
        [ 1114.132055] test_bpf: #45 LD IMM64 jited:0 124 PASS
        [...]
      
      With JIT (generated code can as usual be nicely verified with the help
      of the bpf_jit_disasm tool):
      
        [...]
        [ 1062.726782] test_bpf: #43 MOV REG64 jited:1 6 PASS
        [ 1062.726890] test_bpf: #44 MOV REG32 jited:1 6 PASS
        [ 1062.726993] test_bpf: #45 LD IMM64 jited:1 6 PASS
        [...]
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9dd2af83
  17. 05 Nov 2015, 1 commit
  18. 07 Aug 2015, 6 commits
  19. 31 Jul 2015, 1 commit
  20. 21 Jul 2015, 1 commit
    • A
      test_bpf: add bpf_skb_vlan_push/pop() tests · 4d9c5c53
      Authored by Alexei Starovoitov
      improve accuracy of timing in test_bpf and add two stress tests:
      - {skb->data[0], get_smp_processor_id} repeated 2k times
      - {skb->data[0], vlan_push} x 68 followed by {skb->data[0], vlan_pop} x 68
      
      The 1st test is useful to test the performance of the JIT implementation
      of BPF_LD_ABS together with BPF_CALL instructions.
      The 2nd test stresses the skb_vlan_push/pop logic together with skb->data
      access via the BPF_LD_ABS insn, which checks that re-caching of skb->data
      is done correctly.
      
      In order to call bpf_skb_vlan_push() from test_bpf.ko, we have to add
      three EXPORT_SYMBOL_GPL() entries.
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      4d9c5c53
  21. 09 Jul 2015, 1 commit
  22. 28 May 2015, 1 commit
    • D
      test_bpf: add similarly conflicting jump test case only for classic · bde28bc6
      Authored by Daniel Borkmann
      While 3b529602 ("test_bpf: add more eBPF jump torture cases")
      added the int3 bug test case only for eBPF, which needs exactly 11
      passes to converge, here's a version for classic BPF with 11 passes,
      and one that would need 70 passes on x86_64 to actually converge and
      be successfully JITed. Effectively, all jumps are being optimized out,
      resulting in a JIT image of just 89 bytes (from originally the maximum
      number of BPF insns), only returning K.
      
      Might be useful as a recipe for folks wanting to craft a test case
      when backporting the fix in commit 3f7352bf ("x86: bpf_jit: fix
      compilation of large bpf programs") while not having eBPF. The 2nd
      one is delegated to the interpreter as the last pass still results
      in shrinking; in other words, this one won't be JITed on x86_64.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      bde28bc6
  23. 25 May 2015, 1 commit
  24. 23 May 2015, 1 commit
  25. 15 May 2015, 2 commits
    • M
      test_bpf: fix sparse warnings · 56cbaa45
      Authored by Michael Holzheu
      Fix several sparse warnings like:
      lib/test_bpf.c:1824:25: sparse: constant 4294967295 is so big it is long
      lib/test_bpf.c:1878:25: sparse: constant 0x0000ffffffff0000 is so big it is long
      
      Fixes: cffc642d ("test_bpf: add 173 new testcases for eBPF")
      Reported-by: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      56cbaa45
    • D
      test_bpf: add tests related to BPF_MAXINSNS · a4afd37b
      Authored by Daniel Borkmann
      A couple of torture test cases related to the bug fixed in 0b59d880
      ("ARM: net: delegate filter to kernel interpreter when imm_offset()
      return value can't fit into 12bits.").
      
      I've added a helper to allocate and fill the insn space. Output on
      x86_64 from my laptop:
      
      test_bpf: #233 BPF_MAXINSNS: Maximum possible literals jited:0 7 PASS
      test_bpf: #234 BPF_MAXINSNS: Single literal jited:0 8 PASS
      test_bpf: #235 BPF_MAXINSNS: Run/add until end jited:0 11553 PASS
      test_bpf: #236 BPF_MAXINSNS: Too many instructions PASS
      test_bpf: #237 BPF_MAXINSNS: Very long jump jited:0 9 PASS
      test_bpf: #238 BPF_MAXINSNS: Ctx heavy transformations jited:0 20329 20398 PASS
      test_bpf: #239 BPF_MAXINSNS: Call heavy transformations jited:0 32178 32475 PASS
      test_bpf: #240 BPF_MAXINSNS: Jump heavy test jited:0 10518 PASS
      
      test_bpf: #233 BPF_MAXINSNS: Maximum possible literals jited:1 4 PASS
      test_bpf: #234 BPF_MAXINSNS: Single literal jited:1 4 PASS
      test_bpf: #235 BPF_MAXINSNS: Run/add until end jited:1 1625 PASS
      test_bpf: #236 BPF_MAXINSNS: Too many instructions PASS
      test_bpf: #237 BPF_MAXINSNS: Very long jump jited:1 8 PASS
      test_bpf: #238 BPF_MAXINSNS: Ctx heavy transformations jited:1 3301 3174 PASS
      test_bpf: #239 BPF_MAXINSNS: Call heavy transformations jited:1 24107 23491 PASS
      test_bpf: #240 BPF_MAXINSNS: Jump heavy test jited:1 8651 PASS
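      
      The fill helper mentioned above might look roughly like this (a hedged
      sketch with a hypothetical name and a trivial fill pattern, not the
      upstream implementation):
      
        #include <linux/slab.h>
        #include <linux/filter.h>
        
        /* Allocate len BPF insns and fill them with a simple accumulate
         * pattern terminated by an exit. Caller frees with kfree(). */
        static struct bpf_insn *fill_maxinsns_sketch(unsigned int len)
        {
                struct bpf_insn *insn;
                unsigned int i;
        
                insn = kmalloc_array(len, sizeof(*insn), GFP_KERNEL);
                if (!insn)
                        return NULL;
        
                insn[0] = BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 0);
                for (i = 1; i < len - 1; i++)
                        insn[i] = BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1);
                insn[len - 1] = BPF_EXIT_INSN();
        
                return insn;
        }
      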
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Nicolas Schichan <nschichan@freebox.fr>
      Acked-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a4afd37b
  26. 13 May 2015, 1 commit
  27. 11 May 2015, 1 commit
  28. 01 May 2015, 1 commit
    • D
      test_bpf: indicate whether bpf prog got jited in test suite · 327941f8
      Authored by Daniel Borkmann
      I think this is useful to verify whether a filter could be JITed or
      not when bpf_jit_enable >= 1, which the test suite otherwise doesn't
      tell, short of taking a good peek at the performance numbers.
      
      Nicolas Schichan reported a bug in the ARM JIT compiler that rejected
      the filter and handed it over to the interpreter although it shouldn't
      have. Nevertheless, the test passes as expected, but such information
      is not visible.
      
      It is, for instance, useful for the remaining classic JITs, but also for
      implementing remaining opcodes that are not yet present in eBPF JITs
      (e.g. ARM64 hands some of them over to the interpreter). This minor patch
      allows grepping through dmesg to find those accordingly, and also
      provides a total summary, i.e.: [<X>/53 JIT'ed]
      
        # echo 1 > /proc/sys/net/core/bpf_jit_enable
        # insmod lib/test_bpf.ko
        # dmesg | grep "jited:0"
      
      dmesg example on the ARM issue with JIT rejection:
      
      [...]
      [   67.925387] test_bpf: #2 ADD_SUB_MUL_K jited:1 24 PASS
      [   67.930889] test_bpf: #3 DIV_MOD_KX jited:0 794 PASS
      [   67.943940] test_bpf: #4 AND_OR_LSH_K jited:1 20 20 PASS
      [...]
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Nicolas Schichan <nschichan@freebox.fr>
      Cc: Alexei Starovoitov <ast@plumgrid.com>
      Acked-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      327941f8
  29. 09 Dec 2014, 1 commit