1. 29 Jul 2021, 1 commit
    • bpf: Introduce BPF nospec instruction for mitigating Spectre v4 · f5e81d11
      Committed by Daniel Borkmann
      In case of JITs, each of the JIT backends compiles the BPF nospec instruction
      /either/ to a machine instruction which emits a speculation barrier /or/ to
      /no/ machine instruction in case the underlying architecture is not affected
      by Speculative Store Bypass or has different mitigations in place already.
      
This covers both x86 and (implicitly) arm64: in case of x86, we use the
'lfence' instruction for mitigation. In case of arm64, we rely on the
firmware mitigation as controlled via the ssbd kernel parameter.
Whenever the mitigation is enabled, it works for all of the kernel code
with no need to provide any additional instructions here (hence only a
comment in the arm64 JIT). Other archs can follow as needed. The BPF
nospec instruction specifically targets Spectre v4 since i) we don't use
a serialization barrier for the Spectre v1 case, and ii) mitigation
instructions for v1 and v4 might be different on some archs.
      
The BPF nospec is required for a future commit, where the BPF verifier
will annotate intermediate BPF programs with speculation barriers.
Co-developed-by: Piotr Krysiuk <piotras@gmail.com>
Co-developed-by: Benedict Schlueter <benedict.schlueter@rub.de>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Piotr Krysiuk <piotras@gmail.com>
Signed-off-by: Benedict Schlueter <benedict.schlueter@rub.de>
Acked-by: Alexei Starovoitov <ast@kernel.org>
      f5e81d11
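A minimal user-space sketch of the per-architecture decision described
above (illustrative only; bpf_nospec_barrier() is a made-up name, not
the kernel's JIT code):

  #include <stdio.h>

  static inline void bpf_nospec_barrier(void)
  {
  #if defined(__x86_64__)
          /* x86: emit a speculation barrier against Spectre v4 */
          asm volatile("lfence" ::: "memory");
  #elif defined(__aarch64__)
          /* arm64: no instruction needed, the firmware SSBD mitigation
           * (ssbd kernel parameter) already covers kernel code */
  #endif
  }

  int main(void)
  {
          bpf_nospec_barrier();
          puts("past the barrier point");
          return 0;
  }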
2. 11 Jun 2021, 1 commit
3. 18 May 2021, 1 commit
4. 13 May 2021, 1 commit
5. 15 Jan 2021, 1 commit
6. 17 Sep 2020, 1 commit
    • arm64: bpf: Fix branch offset in JIT · 32f6865c
      Committed by Ilias Apalodimas
      Running the eBPF test_verifier leads to random errors looking like this:
      
      [ 6525.735488] Unexpected kernel BRK exception at EL1
      [ 6525.735502] Internal error: ptrace BRK handler: f2000100 [#1] SMP
      [ 6525.741609] Modules linked in: nls_utf8 cifs libdes libarc4 dns_resolver fscache binfmt_misc nls_ascii nls_cp437 vfat fat aes_ce_blk crypto_simd cryptd aes_ce_cipher ghash_ce gf128mul efi_pstore sha2_ce sha256_arm64 sha1_ce evdev efivars efivarfs ip_tables x_tables autofs4 btrfs blake2b_generic xor xor_neon zstd_compress raid6_pq libcrc32c crc32c_generic ahci xhci_pci libahci xhci_hcd igb libata i2c_algo_bit nvme realtek usbcore nvme_core scsi_mod t10_pi netsec mdio_devres of_mdio gpio_keys fixed_phy libphy gpio_mb86s7x
      [ 6525.787760] CPU: 3 PID: 7881 Comm: test_verifier Tainted: G        W         5.9.0-rc1+ #47
      [ 6525.796111] Hardware name: Socionext SynQuacer E-series DeveloperBox, BIOS build #1 Jun  6 2020
      [ 6525.804812] pstate: 20000005 (nzCv daif -PAN -UAO BTYPE=--)
      [ 6525.810390] pc : bpf_prog_c3d01833289b6311_F+0xc8/0x9f4
      [ 6525.815613] lr : bpf_prog_d53bb52e3f4483f9_F+0x38/0xc8c
      [ 6525.820832] sp : ffff8000130cbb80
      [ 6525.824141] x29: ffff8000130cbbb0 x28: 0000000000000000
      [ 6525.829451] x27: 000005ef6fcbf39b x26: 0000000000000000
      [ 6525.834759] x25: ffff8000130cbb80 x24: ffff800011dc7038
      [ 6525.840067] x23: ffff8000130cbd00 x22: ffff0008f624d080
      [ 6525.845375] x21: 0000000000000001 x20: ffff800011dc7000
      [ 6525.850682] x19: 0000000000000000 x18: 0000000000000000
      [ 6525.855990] x17: 0000000000000000 x16: 0000000000000000
      [ 6525.861298] x15: 0000000000000000 x14: 0000000000000000
      [ 6525.866606] x13: 0000000000000000 x12: 0000000000000000
      [ 6525.871913] x11: 0000000000000001 x10: ffff8000000a660c
      [ 6525.877220] x9 : ffff800010951810 x8 : ffff8000130cbc38
      [ 6525.882528] x7 : 0000000000000000 x6 : 0000009864cfa881
      [ 6525.887836] x5 : 00ffffffffffffff x4 : 002880ba1a0b3e9f
      [ 6525.893144] x3 : 0000000000000018 x2 : ffff8000000a4374
      [ 6525.898452] x1 : 000000000000000a x0 : 0000000000000009
      [ 6525.903760] Call trace:
      [ 6525.906202]  bpf_prog_c3d01833289b6311_F+0xc8/0x9f4
      [ 6525.911076]  bpf_prog_d53bb52e3f4483f9_F+0x38/0xc8c
      [ 6525.915957]  bpf_dispatcher_xdp_func+0x14/0x20
      [ 6525.920398]  bpf_test_run+0x70/0x1b0
      [ 6525.923969]  bpf_prog_test_run_xdp+0xec/0x190
      [ 6525.928326]  __do_sys_bpf+0xc88/0x1b28
      [ 6525.932072]  __arm64_sys_bpf+0x24/0x30
      [ 6525.935820]  el0_svc_common.constprop.0+0x70/0x168
      [ 6525.940607]  do_el0_svc+0x28/0x88
      [ 6525.943920]  el0_sync_handler+0x88/0x190
      [ 6525.947838]  el0_sync+0x140/0x180
      [ 6525.951154] Code: d4202000 d4202000 d4202000 d4202000 (d4202000)
      [ 6525.957249] ---[ end trace cecc3f93b14927e2 ]---
      
The reason is the offset[] creation and its later usage while building
the eBPF body. The code currently omits the first instruction, since
build_insn() will increase our ctx->idx before saving it.
That was fine up until bounded eBPF loops were introduced. After that
introduction, offset[0] must be the offset of the end of the prologue,
which is the start of the 1st insn, while offset[n] holds the offset
of the end of the n-th insn.
      
When the "taken loop with back jump to 1st insn" test runs, it will
eventually call bpf2a64_offset(-1, 2, ctx). Since negative indexing is
permitted, the current outcome depends on the value stored in
ctx->offset[-1], which has nothing to do with our array.
If the value happens to be 0, the tests will pass. If not, this error
triggers.
      
commit 7c2e988f ("bpf: fix x64 JIT code generation for jmp to 1st insn")
fixed an identical bug on x86 when eBPF bounded loops were introduced.

So let's fix it by creating ctx->offset[] differently: track the
beginning of each instruction and account for the extra instruction
while calculating the arm instruction offsets.
      
      Fixes: 2589726d ("bpf: introduce bounded loops")
Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Reported-by: Jiri Olsa <jolsa@kernel.org>
Co-developed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Co-developed-by: Yauheni Kaliuta <yauheni.kaliuta@redhat.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Yauheni Kaliuta <yauheni.kaliuta@redhat.com>
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200917084925.177348-1-ilias.apalodimas@linaro.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      32f6865c
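A toy model of the fixed bookkeeping (instruction sizes invented, not
the kernel code): offset[i] records where eBPF instruction i starts,
with offset[0] being the end of the prologue, so a back jump to the 1st
insn never reads ctx->offset[-1]:

  #include <stdio.h>

  #define NINSN 3

  int main(void)
  {
          int arm_insns[NINSN] = { 2, 1, 3 }; /* arm64 insns per eBPF insn */
          int offset[NINSN + 1];
          int idx = 5;                        /* pretend prologue size */

          for (int i = 0; i < NINSN; i++) {
                  offset[i] = idx;            /* start of eBPF insn i */
                  idx += arm_insns[i];
          }
          offset[NINSN] = idx;                /* end of the last insn */

          /* back jump from eBPF insn 2 to insn 0: stays inside the array */
          printf("branch offset = %d arm64 insns\n", offset[0] - offset[2]);
          return 0;
  }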
7. 31 Jul 2020, 1 commit
8. 11 May 2020, 2 commits
9. 08 May 2020, 1 commit
10. 03 Sep 2019, 1 commit
11. 25 Jun 2019, 1 commit
12. 19 Jun 2019, 1 commit
13. 27 Apr 2019, 2 commits
14. 27 Jan 2019, 1 commit
15. 12 Dec 2018, 1 commit
16. 05 Dec 2018, 1 commit
    • arm64/bpf: don't allocate BPF JIT programs in module memory · 91fc957c
      Committed by Ard Biesheuvel
      The arm64 module region is a 128 MB region that is kept close to
      the core kernel, in order to ensure that relative branches are
      always in range. So using the same region for programs that do
      not have this restriction is wasteful, and preferably avoided.
      
      Now that the core BPF JIT code permits the alloc/free routines to
      be overridden, implement them by vmalloc()/vfree() calls from a
      dedicated 128 MB region set aside for BPF programs. This ensures
      that BPF programs are still in branching range of each other, which
      is something the JIT currently depends upon (and is not guaranteed
      when using module_alloc() on KASLR kernels like we do currently).
      It also ensures that placement of BPF programs does not correlate
      with the placement of the core kernel or modules, making it less
      likely that leaking the former will reveal the latter.
      
This also solves an issue under KASAN, where shadow memory is
needlessly allocated for all BPF programs (which don't require KASAN
shadow pages since they are not KASAN instrumented).
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      91fc957c
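The override described above boils down to routing the JIT's executable
allocations into the dedicated region; roughly like this (a simplified
kernel-side sketch of the patch, with BPF_JIT_REGION_START/END being
the region bounds it introduces):

  void *bpf_jit_alloc_exec(unsigned long size)
  {
          /* allocate from the dedicated 128 MB BPF region instead of
           * the module region */
          return __vmalloc_node_range(size, PAGE_SIZE,
                                      BPF_JIT_REGION_START,
                                      BPF_JIT_REGION_END,
                                      GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
                                      NUMA_NO_NODE,
                                      __builtin_return_address(0));
  }

  void bpf_jit_free_exec(void *addr)
  {
          return vfree(addr);
  }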
17. 30 Nov 2018, 1 commit
18. 27 Nov 2018, 1 commit
    • bpf, arm64: fix getting subprog addr from aux for calls · 8c11ea5c
      Committed by Daniel Borkmann
The arm64 JIT has the same issue as the ppc64 JIT in that the relative
BPF-to-BPF call offset can be too far away from the core kernel, so
the relative encoding into imm is not sufficient and could potentially
be truncated; see also fd045f6c ("arm64: add support for module PLTs")
which adds spill-over space for module_alloc() and therefore
bpf_jit_binary_alloc(). Therefore, use the recently added
bpf_jit_get_func_addr() helper for properly fetching the address
through prog->aux->func[off]->bpf_func instead. This also has the
benefit of optimizing normal helper calls, since their address can use
the optimized emission. Tested on Cavium ThunderX CN8890.
      
      Fixes: db496944 ("bpf: arm64: add JIT support for multi-function programs")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      8c11ea5c
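Roughly how the fixed call emission uses the helper (a simplified
sketch of the arm64 JIT's BPF_JMP | BPF_CALL handling; emit(),
emit_addr_mov_i64() and A64_BLR() are the JIT's own emitters):

  case BPF_JMP | BPF_CALL:
  {
          u64 func_addr;
          bool func_addr_fixed;
          int ret;

          /* resolves prog->aux->func[off]->bpf_func for bpf-to-bpf
           * calls instead of relying on relative imm encoding */
          ret = bpf_jit_get_func_addr(ctx->prog, insn, extra_pass,
                                      &func_addr, &func_addr_fixed);
          if (ret < 0)
                  return ret;
          emit_addr_mov_i64(tmp, func_addr, ctx);
          emit(A64_BLR(tmp), ctx);
          break;
  }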
19. 15 May 2018, 3 commits
    • bpf, arm64: save 4 bytes in prologue when ebpf insns came from cbpf · 56ea6a8b
      Committed by Daniel Borkmann
We can trivially save 4 bytes in the prologue for cBPF since tail calls
can never be used from there. The register push/pop is pairwise, here
x25 (fp) and x26 (tcc), so there is no point in changing that; only the
reset of x26 to zero is not needed.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      56ea6a8b
    • bpf, arm64: optimize 32/64 immediate emission · 6d2eea6f
      Committed by Daniel Borkmann
Improve the JIT's emission of 64 and 32 bit immediates: the current
algorithm is not optimal and we often emit more instructions than
actually needed. arm64 has movz, movn and movk variants, but for the
current 64 bit immediates we only use movz with a series of movk when
needed.
      
      For example loading ffffffffffffabab emits the following 4
      instructions in the JIT today:
      
        * movz: abab, shift:  0, result: 000000000000abab
        * movk: ffff, shift: 16, result: 00000000ffffabab
        * movk: ffff, shift: 32, result: 0000ffffffffabab
        * movk: ffff, shift: 48, result: ffffffffffffabab
      
      Whereas after the patch the same load only needs a single
      instruction:
      
        * movn: 5454, shift:  0, result: ffffffffffffabab
      
      Another example where two extra instructions can be saved:
      
        * movz: abab, shift:  0, result: 000000000000abab
        * movk: 1f2f, shift: 16, result: 000000001f2fabab
        * movk: ffff, shift: 32, result: 0000ffff1f2fabab
        * movk: ffff, shift: 48, result: ffffffff1f2fabab
      
      After the patch:
      
        * movn: e0d0, shift: 16, result: ffffffff1f2fffff
        * movk: abab, shift:  0, result: ffffffff1f2fabab
      
      Another example with movz, before:
      
        * movz: 0000, shift:  0, result: 0000000000000000
        * movk: fea0, shift: 32, result: 0000fea000000000
      
      After:
      
        * movz: fea0, shift: 32, result: 0000fea000000000
      
Moreover, reuse emit_a64_mov_i() for 32 bit immediates that are loaded
via emit_a64_mov_i64(), a similar optimization to the one done in
6fe8b9c1 ("bpf, x64: save several bytes by using mov over movabsq when
possible"). On arm64, the latter allows using a single movn instruction
due to zero extension where otherwise two instructions would be needed.
And last but not least, add a missing optimization in emit_a64_mov_i()
where movn is used but the subsequent movk is not needed. With some of
the Cilium programs in use, this shrinks the needed instructions by
about three percent. Tested on Cavium ThunderX CN8890.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      6d2eea6f
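The chunk-counting idea can be illustrated stand-alone (a
simplification of what the patched emit_a64_mov_i64() does, not the
kernel code itself): count the 16-bit chunks that are all-zeros vs.
all-ones and start from movz or movn accordingly, so only the remaining
chunks need a movk. The test values below are the examples from the
commit message above.

  #include <stdio.h>
  #include <stdint.h>

  static int insns_needed(uint64_t val)
  {
          int zeros = 0, ones = 0, skip;

          for (int s = 0; s < 64; s += 16) {
                  uint16_t chunk = val >> s;

                  if (chunk == 0x0000)
                          zeros++;
                  if (chunk == 0xffff)
                          ones++;
          }
          /* starting with movz leaves all-zero chunks implicit;
           * starting with movn leaves all-ones chunks implicit */
          skip = ones > zeros ? ones : zeros;
          return skip == 4 ? 1 : 4 - skip;
  }

  int main(void)
  {
          printf("%d\n", insns_needed(0xffffffffffffababULL)); /* 1: movn */
          printf("%d\n", insns_needed(0xffffffff1f2fababULL)); /* 2: movn+movk */
          printf("%d\n", insns_needed(0x0000fea000000000ULL)); /* 1: movz */
          return 0;
  }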
    • bpf, arm64: save 4 bytes of unneeded stack space · 09ece3d0
      Committed by Daniel Borkmann
Follow-up to 816d9ef3 ("bpf, arm64: remove ld_abs/ld_ind") in that the
extra 4 byte JIT scratchpad is not needed anymore, since it was only
used by ld_abs/ld_ind as a stack buffer for bpf_load_pointer().
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      09ece3d0
20. 04 May 2018, 1 commit
21. 23 Feb 2018, 1 commit
    • bpf, arm64: fix out of bounds access in tail call · 16338a9b
      Committed by Daniel Borkmann
I recently noticed a crash on arm64 when feeding a bogus index into the
BPF tail call helper. The crash would not occur with the interpreter,
but only in case of the JIT. The output looks as follows:
      
        [  347.007486] Unable to handle kernel paging request at virtual address fffb850e96492510
        [...]
        [  347.043065] [fffb850e96492510] address between user and kernel address ranges
        [  347.050205] Internal error: Oops: 96000004 [#1] SMP
        [...]
        [  347.190829] x13: 0000000000000000 x12: 0000000000000000
        [  347.196128] x11: fffc047ebe782800 x10: ffff808fd7d0fd10
        [  347.201427] x9 : 0000000000000000 x8 : 0000000000000000
        [  347.206726] x7 : 0000000000000000 x6 : 001c991738000000
        [  347.212025] x5 : 0000000000000018 x4 : 000000000000ba5a
        [  347.217325] x3 : 00000000000329c4 x2 : ffff808fd7cf0500
        [  347.222625] x1 : ffff808fd7d0fc00 x0 : ffff808fd7cf0500
        [  347.227926] Process test_verifier (pid: 4548, stack limit = 0x000000007467fa61)
        [  347.235221] Call trace:
        [  347.237656]  0xffff000002f3a4fc
        [  347.240784]  bpf_test_run+0x78/0xf8
        [  347.244260]  bpf_prog_test_run_skb+0x148/0x230
        [  347.248694]  SyS_bpf+0x77c/0x1110
        [  347.251999]  el0_svc_naked+0x30/0x34
        [  347.255564] Code: 9100075a d280220a 8b0a002a d37df04b (f86b694b)
        [...]
      
      In this case the index used in BPF r3 is the same as in r1
      at the time of the call, meaning we fed a pointer as index;
      here, it had the value 0xffff808fd7cf0500 which sits in x2.
      
      While I found tail calls to be working in general (also for
      hitting the error cases), I noticed the following in the code
      emission:
      
        # bpftool p d j i 988
        [...]
        38:   ldr     w10, [x1,x10]
        3c:   cmp     w2, w10
        40:   b.ge    0x000000000000007c              <-- signed cmp
        44:   mov     x10, #0x20                      // #32
        48:   cmp     x26, x10
        4c:   b.gt    0x000000000000007c
        50:   add     x26, x26, #0x1
        54:   mov     x10, #0x110                     // #272
        58:   add     x10, x1, x10
        5c:   lsl     x11, x2, #3
        60:   ldr     x11, [x10,x11]                  <-- faulting insn (f86b694b)
        64:   cbz     x11, 0x000000000000007c
        [...]
      
Meaning, the tests passed because commit ddb55992 ("arm64:
bpf: implement bpf_tail_call() helper") was using signed compares
instead of unsigned ones, which as a result had the test wrongly
passing.

Change both this and the tail call count test to unsigned, and cap the
index as u32. The latter we did as well in 90caccdd ("bpf: fix
bpf_tail_call() x64 JIT"), and it is needed here in addition, too.
Tested on HiSilicon Hi1616.
      
      Result after patch:
      
        # bpftool p d j i 268
        [...]
        38:	ldr	w10, [x1,x10]
        3c:	add	w2, w2, #0x0
        40:	cmp	w2, w10
        44:	b.cs	0x0000000000000080
        48:	mov	x10, #0x20                  	// #32
        4c:	cmp	x26, x10
        50:	b.hi	0x0000000000000080
        54:	add	x26, x26, #0x1
        58:	mov	x10, #0x110                 	// #272
        5c:	add	x10, x1, x10
        60:	lsl	x11, x2, #3
        64:	ldr	x11, [x10,x11]
        68:	cbz	x11, 0x0000000000000080
        [...]
      
      Fixes: ddb55992 ("arm64: bpf: implement bpf_tail_call() helper")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      16338a9b
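The core of the compare bug can be reproduced in plain C (constants
mirror the crash above; the real fix is in the emitted arm64 code): with
a signed compare, an index whose top bit is set looks negative and slips
past the bounds check.

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          uint32_t max_entries = 32;
          uint32_t index = 0xd7cf0500;  /* low bits of the fed-in pointer */

          /* old emission: signed compare (b.ge after cmp w2, w10) */
          int signed_rejects = (int32_t)index >= (int32_t)max_entries;
          /* fixed emission: unsigned compare (b.cs) */
          int unsigned_rejects = index >= max_entries;

          /* prints: signed rejects: 0, unsigned rejects: 1 */
          printf("signed rejects: %d, unsigned rejects: %d\n",
                 signed_rejects, unsigned_rejects);
          return 0;
  }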
22. 27 Jan 2018, 1 commit
23. 20 Jan 2018, 1 commit
24. 17 Jan 2018, 1 commit
    • bpf, arm64: fix stack_depth tracking in combination with tail calls · a2284d91
      Committed by Daniel Borkmann
Using dynamic stack_depth tracking in the arm64 JIT is currently broken
in combination with tail calls. In the prologue, we cache
ctx->stack_size and adjust the SP register to set up the function call
stack, tearing it down again in the epilogue. The problem is that when
doing a tail call, the cached ctx->stack_size might not be the same.
      
      One way to fix the problem with minimal overhead is to re-adjust SP in
      emit_bpf_tail_call() and properly adjust it to the current program's
      ctx->stack_size. Tested on Cavium ThunderX ARMv8.
      
      Fixes: f1c9eed7 ("bpf, arm64: take advantage of stack_depth tracking")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      a2284d91
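The idea behind the fix, as a toy model (stack sizes invented): the
tail-call path undoes the current program's SP adjustment so that the
next program can apply its own.

  #include <stdio.h>

  int main(void)
  {
          int sp = 0;
          int a_stack = 64, b_stack = 128;  /* differing stack_depths */

          sp -= a_stack;  /* prog A prologue: set up A's frame */
          /* tail call from A to B: undo A's adjustment first ... */
          sp += a_stack;
          /* ... the target's entry then sets up B's own frame */
          sp -= b_stack;
          sp += b_stack;  /* B's epilogue tears down the right amount */

          printf("sp = %d (balanced)\n", sp);
          return 0;
  }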
25. 19 Dec 2017, 1 commit
26. 18 Dec 2017, 2 commits
    • bpf: arm64: add JIT support for multi-function programs · db496944
      Committed by Alexei Starovoitov
Similar to x64, add support for bpf-to-bpf calls. When a program has
calls to in-kernel helpers, the target call offset is known at JIT time
and the arm64 architecture needs 2 passes. With bpf-to-bpf calls the
dynamically allocated function start is unknown until all functions of
the program are JITed. Therefore (just like x64) the arm64 JIT needs
one extra pass over the program to emit correct call offsets.
      
Implementation detail:
Avoid being too clever in 64-bit immediate moves and always use 4
instructions (instead of 3-4 depending on the address) to make sure
only one extra pass is needed. If some future optimization would make
it worthwhile to optimize 'call 64-bit imm' further, the JIT would need
to do 4 passes over the program instead of 3 as in this patch. For a
typical bpf program address the mov needs 3 or 4 insns, so an
unconditional 4 insns to save an extra pass is a worthy trade-off at
this stage of the JIT.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      db496944
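The fixed-length encoding mentioned above, as a stand-alone
illustration (register and address arbitrary): one movz plus three movk
always covers a 64-bit call address, so the emitted size never changes
between JIT passes.

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          uint64_t addr = 0xffff800010951810ULL;  /* example call target */

          /* movz for the low chunk, then movk for the remaining three */
          printf("movz x10, #0x%04x\n", (unsigned)(addr & 0xffff));
          for (int s = 16; s < 64; s += 16)
                  printf("movk x10, #0x%04x, lsl #%d\n",
                         (unsigned)((addr >> s) & 0xffff), s);
          return 0;
  }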
    • bpf: fix net.core.bpf_jit_enable race · 60b58afc
      Committed by Alexei Starovoitov
The global bpf_jit_enable variable is tested multiple times in the
JITs, blinding, and verifier core. A malicious root user can try to
toggle it while loading programs. This race condition was accounted
for and there should be no issues, but it's safer to avoid it entirely.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      60b58afc
27. 10 Aug 2017, 1 commit
28. 01 Jul 2017, 1 commit
29. 12 Jun 2017, 1 commit
30. 08 Jun 2017, 1 commit
31. 07 Jun 2017, 1 commit
32. 01 Jun 2017, 1 commit
33. 12 May 2017, 1 commit
    • bpf, arm64: fix faulty emission of map access in tail calls · d8b54110
      Committed by Daniel Borkmann
Shubham was recently asking on netdev why in the arm64 JIT we don't
multiply the index for accessing the tail call map by 8. That led me
to test the arm64 JIT with respect to tail calls, and it turned out I
got a NULL pointer dereference on the tail call.
      
      The buggy access is at:
      
        prog = array->ptrs[index];
        if (prog == NULL)
            goto out;
      
        [...]
        00000060:  d2800e0a  mov x10, #0x70 // #112
        00000064:  f86a682a  ldr x10, [x1,x10]
        00000068:  f862694b  ldr x11, [x10,x2]
        0000006c:  b40000ab  cbz x11, 0x00000080
        [...]
      
The code triggering the crash is f862694b. x1 at the time contains the
address of the bpf array, x10 holds offsetof(struct bpf_array, ptrs).
Meaning, above we load the pointer to the program at map slot 0 into
x10. x10 can then be NULL if the slot is not occupied, which we later
on try to access with a user-given offset in x2 that is the map index.
      
      Fix this by emitting the following instead:
      
        [...]
        00000060:  d2800e0a  mov x10, #0x70 // #112
        00000064:  8b0a002a  add x10, x1, x10
        00000068:  d37df04b  lsl x11, x2, #3
        0000006c:  f86b694b  ldr x11, [x10,x11]
        00000070:  b40000ab  cbz x11, 0x00000084
        [...]
      
This basically adds the offset of ptrs to the base address of the bpf
array we got, and we later on access the map with an index * 8 offset
relative to that. The tail call map itself is basically one large area
with metadata at the head followed by the array of prog pointers. This
makes tail calls work again; tested on Cavium ThunderX ARMv8.

Fixes: ddb55992 ("arm64: bpf: implement bpf_tail_call() helper")
Reported-by: Shubham Bansal <illusionist.neo@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
      d8b54110
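The fixed address computation can be mimicked in plain C (toy types;
0x70 stands in for offsetof(struct bpf_array, ptrs) as in the
disassembly above):

  #include <stdio.h>
  #include <stdint.h>

  struct toy_bpf_array {
          char meta[0x70];  /* stand-in for the map header */
          void *ptrs[4];    /* array of prog pointers */
  };

  int main(void)
  {
          struct toy_bpf_array array = {
                  .ptrs = { NULL, NULL, (void *)0x1234, NULL },
          };
          uint64_t index = 2;

          /* add x10, x1, x10 ; lsl x11, x2, #3 ; ldr x11, [x10,x11] */
          uintptr_t base = (uintptr_t)&array + 0x70;
          void *prog = *(void **)(base + (index << 3));

          printf("slot %llu -> %p\n", (unsigned long long)index, prog);
          return 0;
  }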
34. 09 May 2017, 1 commit
35. 03 May 2017, 1 commit
    • bpf, arm64: fix jit branch offset related to ldimm64 · ddc665a4
      Committed by Daniel Borkmann
When the instruction right before the branch destination is a 64 bit
load immediate, we currently calculate the wrong jump offset in the
ctx->offset[] array, as we only account for one instruction slot for
the 64 bit load immediate although it uses two BPF instructions. Fix
it up by setting the offset into the right slot after we have
incremented the index.
      
      Before (ldimm64 test 1):
      
        [...]
        00000020:  52800007  mov w7, #0x0 // #0
        00000024:  d2800060  mov x0, #0x3 // #3
        00000028:  d2800041  mov x1, #0x2 // #2
        0000002c:  eb01001f  cmp x0, x1
        00000030:  54ffff82  b.cs 0x00000020
        00000034:  d29fffe7  mov x7, #0xffff // #65535
        00000038:  f2bfffe7  movk x7, #0xffff, lsl #16
        0000003c:  f2dfffe7  movk x7, #0xffff, lsl #32
        00000040:  f2ffffe7  movk x7, #0xffff, lsl #48
        00000044:  d29dddc7  mov x7, #0xeeee // #61166
        00000048:  f2bdddc7  movk x7, #0xeeee, lsl #16
        0000004c:  f2ddddc7  movk x7, #0xeeee, lsl #32
        00000050:  f2fdddc7  movk x7, #0xeeee, lsl #48
        [...]
      
      After (ldimm64 test 1):
      
        [...]
        00000020:  52800007  mov w7, #0x0 // #0
        00000024:  d2800060  mov x0, #0x3 // #3
        00000028:  d2800041  mov x1, #0x2 // #2
        0000002c:  eb01001f  cmp x0, x1
        00000030:  540000a2  b.cs 0x00000044
        00000034:  d29fffe7  mov x7, #0xffff // #65535
        00000038:  f2bfffe7  movk x7, #0xffff, lsl #16
        0000003c:  f2dfffe7  movk x7, #0xffff, lsl #32
        00000040:  f2ffffe7  movk x7, #0xffff, lsl #48
        00000044:  d29dddc7  mov x7, #0xeeee // #61166
        00000048:  f2bdddc7  movk x7, #0xeeee, lsl #16
        0000004c:  f2ddddc7  movk x7, #0xeeee, lsl #32
        00000050:  f2fdddc7  movk x7, #0xeeee, lsl #48
        [...]
      
      Also, add a couple of test cases to make sure JITs pass
      this test. Tested on Cavium ThunderX ARMv8. The added
      test cases all pass after the fix.
      
      Fixes: 8eee539d ("arm64: bpf: fix out-of-bounds read in bpf2a64_offset()")
Reported-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Cc: Xi Wang <xi.wang@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      ddc665a4
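A toy model of the corrected bookkeeping (sizes invented; in this era
of the JIT, offset[i] is the end offset of eBPF insn i): a ldimm64
occupies two BPF instruction slots, and the fix records the emitted
position in the second slot as well, so a branch landing right after
it reads a valid entry.

  #include <stdio.h>

  int main(void)
  {
          int offset[4];
          int idx = 0, i = 0;

          offset[i++] = idx += 1;  /* insn 0: one arm64 insn */
          idx += 4;                /* insn 1: ldimm64 emits 4 arm64 insns */
          offset[i++] = idx;
          offset[i++] = idx;       /* fix: ldimm64's second slot, set too */
          offset[i++] = idx += 1;  /* insn 3: first insn after the ldimm64 */

          /* a branch to insn 3 resolves via offset[2], now valid */
          printf("target offset = %d arm64 insns\n", offset[2]);
          return 0;
  }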