1. 10 Aug 2017, 1 commit
  2. 01 Jul 2017, 1 commit
  3. 12 Jun 2017, 1 commit
  4. 08 Jun 2017, 1 commit
  5. 07 Jun 2017, 1 commit
  6. 01 Jun 2017, 1 commit
  7. 12 May 2017, 1 commit
    • bpf, arm64: fix faulty emission of map access in tail calls · d8b54110
      Committed by Daniel Borkmann
      Shubham was recently asking on netdev why in the arm64 JIT we don't
      multiply the index for accessing the tail call map by 8. That led me
      to test the arm64 JIT with regard to tail calls, and it turned out I
      got a NULL pointer dereference on the tail call.
      
      The buggy access is at:
      
        prog = array->ptrs[index];
        if (prog == NULL)
            goto out;
      
        [...]
        00000060:  d2800e0a  mov x10, #0x70 // #112
        00000064:  f86a682a  ldr x10, [x1,x10]
        00000068:  f862694b  ldr x11, [x10,x2]
        0000006c:  b40000ab  cbz x11, 0x00000080
        [...]
      
      The instruction triggering the crash is f862694b. x1 at that point
      contains the address of the bpf array, and x10 holds
      offsetof(struct bpf_array, ptrs). So above we load the pointer to the
      program at map slot 0 into x10, which can be NULL if the slot is not
      occupied, and we then try to dereference it with a user-given offset
      in x2, the map index.
      
      Fix this by emitting the following instead:
      
        [...]
        00000060:  d2800e0a  mov x10, #0x70 // #112
        00000064:  8b0a002a  add x10, x1, x10
        00000068:  d37df04b  lsl x11, x2, #3
        0000006c:  f86b694b  ldr x11, [x10,x11]
        00000070:  b40000ab  cbz x11, 0x00000084
        [...]
      
      This adds the offset of ptrs to the base address of the bpf array we
      got, and we then access the map with an index * 8 offset relative to
      that. The tail call map itself is one large area with metadata at the
      head, followed by the array of prog pointers. This makes tail calls
      work again; tested on Cavium ThunderX ARMv8.
      
      Fixes: ddb55992 ("arm64: bpf: implement bpf_tail_call() helper")
      Reported-by: Shubham Bansal <illusionist.neo@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
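      A minimal userspace sketch (not the kernel JIT itself) of why the
      fixed emission is correct: the slot address must be computed as
      base + offsetof(struct bpf_array, ptrs) + index * 8, i.e. the index
      has to be scaled by the pointer size before being used as an offset.
      The struct below is a simplified stand-in for the kernel's
      struct bpf_array:

        #include <stddef.h>
        #include <stdint.h>
        #include <stdio.h>

        struct fake_bpf_array {
            char meta[112];     /* stand-in for the struct bpf_map header */
            void *ptrs[4];      /* array of prog pointers */
        };

        int main(void)
        {
            struct fake_bpf_array arr = { .ptrs = { (void *)0x1000, NULL } };
            uint64_t index = 0;

            /* What the fixed sequence computes: base plus the ptrs
             * offset, then index << 3 as the scaled slot offset. */
            uintptr_t base = (uintptr_t)&arr;
            uintptr_t slot = base + offsetof(struct fake_bpf_array, ptrs)
                             + (index << 3);

            /* Must match plain C indexing; prints 1. */
            printf("%d\n", (void **)slot == &arr.ptrs[index]);
            return 0;
        }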
  8. 09 May 2017, 1 commit
  9. 03 May 2017, 2 commits
    • bpf, arm64: fix jit branch offset related to ldimm64 · ddc665a4
      Committed by Daniel Borkmann
      When the instruction right before the branch destination is a 64-bit
      load immediate, we currently calculate the wrong jump offset in the
      ctx->offset[] array, as we only account for one instruction slot for
      the 64-bit load immediate although it uses two BPF instructions. Fix
      it up by setting the offset into the right slot after we have
      incremented the index.
      
      Before (ldimm64 test 1):
      
        [...]
        00000020:  52800007  mov w7, #0x0 // #0
        00000024:  d2800060  mov x0, #0x3 // #3
        00000028:  d2800041  mov x1, #0x2 // #2
        0000002c:  eb01001f  cmp x0, x1
        00000030:  54ffff82  b.cs 0x00000020
        00000034:  d29fffe7  mov x7, #0xffff // #65535
        00000038:  f2bfffe7  movk x7, #0xffff, lsl #16
        0000003c:  f2dfffe7  movk x7, #0xffff, lsl #32
        00000040:  f2ffffe7  movk x7, #0xffff, lsl #48
        00000044:  d29dddc7  mov x7, #0xeeee // #61166
        00000048:  f2bdddc7  movk x7, #0xeeee, lsl #16
        0000004c:  f2ddddc7  movk x7, #0xeeee, lsl #32
        00000050:  f2fdddc7  movk x7, #0xeeee, lsl #48
        [...]
      
      After (ldimm64 test 1):
      
        [...]
        00000020:  52800007  mov w7, #0x0 // #0
        00000024:  d2800060  mov x0, #0x3 // #3
        00000028:  d2800041  mov x1, #0x2 // #2
        0000002c:  eb01001f  cmp x0, x1
        00000030:  540000a2  b.cs 0x00000044
        00000034:  d29fffe7  mov x7, #0xffff // #65535
        00000038:  f2bfffe7  movk x7, #0xffff, lsl #16
        0000003c:  f2dfffe7  movk x7, #0xffff, lsl #32
        00000040:  f2ffffe7  movk x7, #0xffff, lsl #48
        00000044:  d29dddc7  mov x7, #0xeeee // #61166
        00000048:  f2bdddc7  movk x7, #0xeeee, lsl #16
        0000004c:  f2ddddc7  movk x7, #0xeeee, lsl #32
        00000050:  f2fdddc7  movk x7, #0xeeee, lsl #48
        [...]
      
      Also, add a couple of test cases to make sure JITs pass this test.
      Tested on Cavium ThunderX ARMv8; the added test cases all pass after
      the fix.
      
      Fixes: 8eee539d ("arm64: bpf: fix out-of-bounds read in bpf2a64_offset()")
      Reported-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Cc: Xi Wang <xi.wang@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
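      A toy sketch of the bookkeeping the fix describes, assuming a
      simplified build step that emits four arm64 instructions for an
      ldimm64 (which occupies two BPF instruction slots) and one for
      everything else; all names here are illustrative, not the kernel
      source:

        #include <stdio.h>

        #define NINSNS 4

        int main(void)
        {
            /* 1 where a BPF insn is a 64-bit load immediate. */
            int is_ldimm64[NINSNS] = { 0, 1, 0, 0 };
            int offset[NINSNS + 1] = { 0 };
            int jitted = 0;     /* running count of emitted arm64 insns */

            for (int i = 0; i < NINSNS; i++) {
                if (is_ldimm64[i]) {
                    jitted += 4;    /* e.g. mov plus three movk */
                    i++;            /* ldimm64 uses two BPF slots */
                } else {
                    jitted += 1;
                }
                /* Record into the slot *after* the increment, so a
                 * branch to the insn following an ldimm64 resolves to
                 * the right jitted position. offset[2] stays 0: the
                 * second slot of an ldimm64 is never a branch target. */
                offset[i + 1] = jitted;
            }
            for (int i = 0; i <= NINSNS; i++)
                printf("offset[%d] = %d\n", i, offset[i]);
            return 0;
        }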
    • bpf, arm64: implement jiting of BPF_XADD · 85f68fe8
      Committed by Daniel Borkmann
      This work adds BPF_XADD for BPF_W/BPF_DW to the arm64 JIT and thereby
      completes JITing of all BPF instructions, meaning we can also remove
      the 'notyet' label and no longer need to fall back to the interpreter
      when BPF_XADD is used in a program.
      
      This also brings the arm64 JIT in line with x86_64, s390x, ppc64 and
      sparc64, where all current eBPF features are supported.
      
      BPF_W example from test_bpf:
      
        .u.insns_int = {
          BPF_ALU32_IMM(BPF_MOV, R0, 0x12),
          BPF_ST_MEM(BPF_W, R10, -40, 0x10),
          BPF_STX_XADD(BPF_W, R10, R0, -40),
          BPF_LDX_MEM(BPF_W, R0, R10, -40),
          BPF_EXIT_INSN(),
        },
      
        [...]
        00000020:  52800247  mov w7, #0x12 // #18
        00000024:  928004eb  mov x11, #0xffffffffffffffd8 // #-40
        00000028:  d280020a  mov x10, #0x10 // #16
        0000002c:  b82b6b2a  str w10, [x25,x11]
        // start of xadd mapping:
        00000030:  928004ea  mov x10, #0xffffffffffffffd8 // #-40
        00000034:  8b19014a  add x10, x10, x25
        00000038:  f9800151  prfm pstl1strm, [x10]
        0000003c:  885f7d4b  ldxr w11, [x10]
        00000040:  0b07016b  add w11, w11, w7
        00000044:  880b7d4b  stxr w11, w11, [x10]
        00000048:  35ffffab  cbnz w11, 0x0000003c
        // end of xadd mapping:
        [...]
      
      BPF_DW example from test_bpf:
      
        .u.insns_int = {
          BPF_ALU32_IMM(BPF_MOV, R0, 0x12),
          BPF_ST_MEM(BPF_DW, R10, -40, 0x10),
          BPF_STX_XADD(BPF_DW, R10, R0, -40),
          BPF_LDX_MEM(BPF_DW, R0, R10, -40),
          BPF_EXIT_INSN(),
        },
      
        [...]
        00000020:  52800247  mov w7,  #0x12 // #18
        00000024:  928004eb  mov x11, #0xffffffffffffffd8 // #-40
        00000028:  d280020a  mov x10, #0x10 // #16
        0000002c:  f82b6b2a  str x10, [x25,x11]
        // start of xadd mapping:
        00000030:  928004ea  mov x10, #0xffffffffffffffd8 // #-40
        00000034:  8b19014a  add x10, x10, x25
        00000038:  f9800151  prfm pstl1strm, [x10]
        0000003c:  c85f7d4b  ldxr x11, [x10]
        00000040:  8b07016b  add x11, x11, x7
        00000044:  c80b7d4b  stxr w11, x11, [x10]
        00000048:  35ffffab  cbnz w11, 0x0000003c
        // end of xadd mapping:
        [...]
      
      Tested on Cavium ThunderX ARMv8, test suite results after the patch:
      
        No JIT:   [ 3751.855362] test_bpf: Summary: 311 PASSED, 0 FAILED, [0/303 JIT'ed]
        With JIT: [ 3573.759527] test_bpf: Summary: 311 PASSED, 0 FAILED, [303/303 JIT'ed]
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
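      The ldxr/add/stxr/cbnz sequence above is an LL/SC retry loop: the
      exclusive store fails if the location changed since the exclusive
      load, and the JITed code loops back and retries. A userspace
      analogue using a C11 compare-exchange loop (a sketch of the
      semantics, not the kernel code):

        #include <stdatomic.h>
        #include <stdint.h>
        #include <stdio.h>

        static void xadd32(_Atomic uint32_t *addr, uint32_t val)
        {
            uint32_t old = atomic_load_explicit(addr, memory_order_relaxed);

            /* Retry on contention, like cbnz branching back to ldxr. */
            while (!atomic_compare_exchange_weak_explicit(
                           addr, &old, old + val,
                           memory_order_relaxed, memory_order_relaxed))
                ;
        }

        int main(void)
        {
            _Atomic uint32_t mem = 0x10;

            xadd32(&mem, 0x12);     /* BPF_STX_XADD(BPF_W, ...) analogue */
            printf("0x%x\n", (unsigned)atomic_load(&mem));  /* 0x22 */
            return 0;
        }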
  10. 29 Apr 2017, 1 commit
  11. 22 Feb 2017, 1 commit
  12. 18 Feb 2017, 2 commits
    • bpf: make jited programs visible in traces · 74451e66
      Committed by Daniel Borkmann
      A long-standing issue with JITed programs is that stack traces from
      function tracing check whether a given address is kernel code through
      {__,}kernel_text_address(), which checks for code in the core kernel,
      modules and dynamically allocated ftrace trampolines. What is still
      missing are BPF JITed programs (interpreted programs are not an
      issue, as __bpf_prog_run() will be attributed to them), so when a
      stack trace is triggered, the code walking the stack won't see any of
      the JITed programs. The same holds for address correlation done from
      user space by reading /proc/kallsyms. This is read by tools like
      perf, and the latter is also useful for permanent live tracing with
      eBPF itself in combination with stack maps, when other eBPF types are
      part of the callchain. See the offwaketime example on dumping stack
      traces from a map.
      
      This work tackles that issue by making the addresses and symbols
      known to the kernel. The lookup from *kernel_text_address() is
      implemented through a latched RB tree that can be read under RCU in
      the fast path, and that is also shared for symbol/size/offset lookup
      of a specific given address in kallsyms. The slow-path iteration
      through all symbols in the seq file is done via an RCU list, which
      holds a tiny fraction of all exported ksyms, usually below 0.1
      percent. Function symbols are exported as bpf_prog_<tag>, in order
      to aid debugging and attribution. This facility is currently enabled
      root-only, when bpf_jit_kallsyms is set to 1, and is disabled if
      hardening is active in any mode. The rationale behind this is that a
      lot of systems still ship with world-read permissions on kallsyms,
      so addresses should not suddenly get exposed for them. If that
      situation improves in the future, we always have the option to
      change the default. Likewise, unprivileged programs are not allowed
      to add entries either, but that is less of a concern, as most
      program types relevant in this context are root-only anyway. If
      enabled, call graphs and stack traces will then show correct
      attribution; one example is illustrated below, where the trace is
      now visible in tooling such as perf script --kallsyms=/proc/kallsyms
      and friends.
      
      Before:
      
        7fff8166889d bpf_clone_redirect+0x80007f0020ed (/lib/modules/4.9.0-rc8+/build/vmlinux)
               f5d80 __sendmsg_nocancel+0xffff006451f1a007 (/usr/lib64/libc-2.18.so)
      
      After:
      
        7fff816688b7 bpf_clone_redirect+0x80007f002107 (/lib/modules/4.9.0-rc8+/build/vmlinux)
        7fffa0575728 bpf_prog_33c45a467c9e061a+0x8000600020fb (/lib/modules/4.9.0-rc8+/build/vmlinux)
        7fffa07ef1fc cls_bpf_classify+0x8000600020dc (/lib/modules/4.9.0-rc8+/build/vmlinux)
        7fff81678b68 tc_classify+0x80007f002078 (/lib/modules/4.9.0-rc8+/build/vmlinux)
        7fff8164d40b __netif_receive_skb_core+0x80007f0025fb (/lib/modules/4.9.0-rc8+/build/vmlinux)
        7fff8164d718 __netif_receive_skb+0x80007f002018 (/lib/modules/4.9.0-rc8+/build/vmlinux)
        7fff8164e565 process_backlog+0x80007f002095 (/lib/modules/4.9.0-rc8+/build/vmlinux)
        7fff8164dc71 net_rx_action+0x80007f002231 (/lib/modules/4.9.0-rc8+/build/vmlinux)
        7fff81767461 __softirqentry_text_start+0x80007f0020d1 (/lib/modules/4.9.0-rc8+/build/vmlinux)
        7fff817658ac do_softirq_own_stack+0x80007f00201c (/lib/modules/4.9.0-rc8+/build/vmlinux)
        7fff810a2c20 do_softirq+0x80007f002050 (/lib/modules/4.9.0-rc8+/build/vmlinux)
        7fff810a2cb5 __local_bh_enable_ip+0x80007f002085 (/lib/modules/4.9.0-rc8+/build/vmlinux)
        7fff8168d452 ip_finish_output2+0x80007f002152 (/lib/modules/4.9.0-rc8+/build/vmlinux)
        7fff8168ea3d ip_finish_output+0x80007f00217d (/lib/modules/4.9.0-rc8+/build/vmlinux)
        7fff8168f2af ip_output+0x80007f00203f (/lib/modules/4.9.0-rc8+/build/vmlinux)
        [...]
        7fff81005854 do_syscall_64+0x80007f002054 (/lib/modules/4.9.0-rc8+/build/vmlinux)
        7fff817649eb return_from_SYSCALL_64+0x80007f002000 (/lib/modules/4.9.0-rc8+/build/vmlinux)
               f5d80 __sendmsg_nocancel+0xffff01c484812007 (/usr/lib64/libc-2.18.so)
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: David S. Miller <davem@davemloft.net>
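      A small sketch of the user-visible effect: once bpf_jit_kallsyms is
      set to 1 (and with sufficient privileges, per the restrictions
      described above), JITed programs show up as bpf_prog_<tag> entries
      in /proc/kallsyms, which a trivial scanner can pick out:

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            FILE *f = fopen("/proc/kallsyms", "r");
            char line[256];

            if (!f)
                return 1;
            while (fgets(line, sizeof(line), f))
                if (strstr(line, " bpf_prog_"))
                    fputs(line, stdout);  /* e.g. bpf_prog_33c45a467c9e061a */
            fclose(f);
            return 0;
        }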
    • bpf: remove stubs for cBPF from arch code · 9383191d
      Committed by Daniel Borkmann
      Remove the dummy bpf_jit_compile() stubs for eBPF JITs and make
      that a single __weak function in the core that can be overridden
      similarly to the eBPF one. Also remove stale pr_err() mentions
      of bpf_jit_compile.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
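      For illustration, the __weak pattern the commit switches to lets the
      core provide a default definition that an architecture can override
      at link time. A minimal sketch with GCC/Clang (the kernel's real
      bpf_jit_compile() takes a struct bpf_prog argument; the signature
      here is simplified):

        #include <stdio.h>

        /* core default: nothing to do when no cBPF JIT exists */
        __attribute__((weak)) void bpf_jit_compile(void)
        {
            puts("core stub: no cBPF JIT");
        }

        int main(void)
        {
            /* An arch object file defining a non-weak bpf_jit_compile()
             * would silently replace the stub above. */
            bpf_jit_compile();
            return 0;
        }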
  13. 11 Jun 2016, 3 commits
    • arm64: bpf: optimize LD_ABS, LD_IND · 643c332d
      Committed by Zi Shen Lim
      Remove superfluous stack frame, saving us 3 instructions for every
      LD_ABS or LD_IND.
      Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • arm64: bpf: optimize JMP_CALL · 997ce888
      Committed by Zi Shen Lim
      Remove superfluous stack frame, saving us 3 instructions for
      every JMP_CALL.
      Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • arm64: bpf: implement bpf_tail_call() helper · ddb55992
      Committed by Zi Shen Lim
      Add support for JMP_CALL_X (tail call) introduced by commit 04fd61ab
      ("bpf: allow bpf programs to tail-call other bpf programs").
      
      bpf_tail_call() arguments:
        ctx   - context pointer passed to the next program
        array - pointer to a map of type BPF_MAP_TYPE_PROG_ARRAY
        index - index into the array that selects the specific program to run
      
      In this implementation the arm64 JIT jumps into the callee program
      after the prologue, so the callee program reuses the same stack. For
      tail_call_cnt, we use the callee-saved register x26 (which was
      already saved/restored but previously unused by the JIT).
      
      With this patch a tail call generates the following code on arm64:
      
        if (index >= array->map.max_entries)
            goto out;
      
        34:   mov     x10, #0x10                      // #16
        38:   ldr     w10, [x1,x10]
        3c:   cmp     w2, w10
        40:   b.ge    0x0000000000000074
      
        if (tail_call_cnt > MAX_TAIL_CALL_CNT)
            goto out;
        tail_call_cnt++;
      
        44:   mov     x10, #0x20                      // #32
        48:   cmp     x26, x10
        4c:   b.gt    0x0000000000000074
        50:   add     x26, x26, #0x1
      
        prog = array->ptrs[index];
        if (prog == NULL)
            goto out;
      
        54:   mov     x10, #0x68                      // #104
        58:   ldr     x10, [x1,x10]
        5c:   ldr     x11, [x10,x2]
        60:   cbz     x11, 0x0000000000000074
      
        goto *(prog->bpf_func + prologue_size);
      
        64:   mov     x10, #0x20                      // #32
        68:   ldr     x10, [x11,x10]
        6c:   add     x10, x10, #0x20
        70:   br      x10
        74:
      Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
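      A toy userspace model of the three guards shown above: the bounds
      check, the tail-call depth limit, and the NULL-slot check.
      MAX_TAIL_CALL_CNT is 32 in the kernel; everything else here (names,
      dispatch via a plain function call) is illustrative:

        #include <stdio.h>

        #define MAX_TAIL_CALL_CNT 32

        typedef int (*prog_fn)(void *ctx, int tail_call_cnt);

        static prog_fn progs[4];              /* stand-in for array->ptrs[] */
        static unsigned int max_entries = 4;  /* stand-in for map.max_entries */

        static int tail_call(void *ctx, unsigned int index, int tail_call_cnt)
        {
            if (index >= max_entries)
                return -1;                      /* goto out */
            if (tail_call_cnt > MAX_TAIL_CALL_CNT)
                return -1;                      /* goto out */
            if (!progs[index])
                return -1;                      /* goto out */
            return progs[index](ctx, tail_call_cnt + 1);
        }

        static int prog0(void *ctx, int tail_call_cnt)
        {
            /* self tail call; the depth limit makes it terminate */
            return tail_call(ctx, 0, tail_call_cnt);
        }

        int main(void)
        {
            progs[0] = prog0;
            printf("%d\n", prog0(NULL, 0));     /* -1 once the cap hits */
            return 0;
        }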
  14. 18 May 2016, 1 commit
  15. 17 May 2016, 3 commits
  16. 15 May 2016, 1 commit
    • arm64: bpf: jit JMP_JSET_{X,K} · 98397fc5
      Committed by Zi Shen Lim
      The original implementation in commit e54bcde3 ("arm64: eBPF JIT
      compiler") had the relevant code paths, but due to an oversight they
      always failed jiting.
      
      As a result, we had been falling back to the BPF interpreter whenever
      a BPF program contains JMP_JSET_{X,K} instructions.
      
      With this fix, we confirm that the corresponding tests in
      lib/test_bpf continue to pass, and are now jited.
      
      ...
      [    2.784553] test_bpf: #30 JSET jited:1 188 192 197 PASS
      [    2.791373] test_bpf: #31 tcpdump port 22 jited:1 325 677 625 PASS
      [    2.808800] test_bpf: #32 tcpdump complex jited:1 323 731 991 PASS
      ...
      [    3.190759] test_bpf: #237 JMP_JSET_K: if (0x3 & 0x2) return 1 jited:1 110 PASS
      [    3.192524] test_bpf: #238 JMP_JSET_K: if (0x3 & 0xffffffff) return 1 jited:1 98 PASS
      [    3.211014] test_bpf: #249 JMP_JSET_X: if (0x3 & 0x2) return 1 jited:1 120 PASS
      [    3.212973] test_bpf: #250 JMP_JSET_X: if (0x3 & 0xffffffff) return 1 jited:1 89 PASS
      ...
      
      Fixes: e54bcde3 ("arm64: eBPF JIT compiler")
      Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Acked-by: Yang Shi <yang.shi@linaro.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
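      For reference, JSET is a bit-test conditional jump: the branch is
      taken iff (dst & src/imm) != 0, which is exactly what the test names
      above encode. A tiny sketch of the semantics:

        #include <stdio.h>

        /* JMP_JSET_K analogue: "if (dst & imm) return 1" */
        static int jset(unsigned long long dst, unsigned long long imm)
        {
            return (dst & imm) != 0;
        }

        int main(void)
        {
            printf("%d\n", jset(0x3, 0x2));         /* 1 */
            printf("%d\n", jset(0x3, 0xffffffff));  /* 1 */
            return 0;
        }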
  17. 18 Jan 2016, 1 commit
  18. 19 Dec 2015, 1 commit
    • bpf: move clearing of A/X into classic to eBPF migration prologue · 8b614aeb
      Committed by Daniel Borkmann
      Back in the days when eBPF (or back then "internal BPF" ;->) was not
      exposed to user space, and only classic BPF programs were internally
      translated into eBPF programs, we missed the fact that for classic
      BPF, A and X need to be cleared. It was fixed back then via 83d5b7ef
      ("net: filter: initialize A and X registers"), and thus classic BPF
      specifics were added to the eBPF interpreter core to work around it.
      
      This added some confusion for JIT developers later on who take the
      eBPF interpreter code as an example for deriving their JIT. E.g. in
      f75298f5 ("s390/bpf: clear correct BPF accumulator register"), at
      least X could leak stack memory. Furthermore, since this is only
      needed for classic BPF translations and not for eBPF (the verifier
      takes care that registers cannot be read uninitialized), more
      complexity is added to JITs, as they need to determine whether they
      deal with migrations or native eBPF, where they can just omit
      clearing A/X in their prologue and thus reduce image size a bit; see
      e.g. cde66c2d ("s390/bpf: Only clear A and X for converted BPF
      programs"). In other cases (x86, arm64), A and X are cleared in the
      prologue also for the eBPF case, which is unnecessary.
      
      Let's move this into the BPF migration in bpf_convert_filter() where
      it actually belongs, as long as the number of eBPF JITs is still
      small. It can thus be done generically, allowing us to remove the
      quirk from __bpf_prog_run() and to slightly reduce JIT image size in
      the eBPF case, while reducing code duplication on this matter in
      current (and future) eBPF JITs.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Reviewed-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
      Tested-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
      Cc: Zi Shen Lim <zlim.lnx@gmail.com>
      Cc: Yang Shi <yang.shi@linaro.org>
      Acked-by: Yang Shi <yang.shi@linaro.org>
      Acked-by: Zi Shen Lim <zlim.lnx@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
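      A toy model of the hazard being centralized here: classic BPF may
      legally read A or X before ever writing them, so a single clearing
      step in the migration prologue makes those reads defined without
      every JIT repeating the work. Purely illustrative:

        #include <stdio.h>

        struct classic_regs { unsigned int a, x; };

        /* a "migrated" classic program that reads A/X before any store */
        static unsigned int run_classic(const struct classic_regs *r)
        {
            return r->a + r->x;
        }

        int main(void)
        {
            struct classic_regs r;

            /* migration prologue: clear A and X exactly once */
            r.a = 0;
            r.x = 0;
            printf("%u\n", run_classic(&r));    /* defined result: 0 */
            return 0;
        }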
  19. 04 Dec 2015, 1 commit
  20. 19 Nov 2015, 1 commit
  21. 18 Nov 2015, 1 commit
    • arm64: bpf: make BPF prologue and epilogue align with ARM64 AAPCS · ec0738db
      Committed by Yang Shi
      Save and restore FP/LR in the BPF prog prologue and epilogue, and
      save SP to FP in the prologue in order to get a correct stack
      backtrace.
      
      However, the ARM64 JIT used FP (x29) as the eBPF fp register, and FP
      is subject to change during a function call, so it may cause the BPF
      prog stack base address to change too.
      
      Use x25 to replace FP as the BPF stack base register (fp). Since x25
      is a callee-saved register, it will stay intact across function
      calls. It is initialized in the BPF prog prologue each time the BPF
      prog starts to run. Save and restore x25/x26 in the BPF prologue and
      epilogue to keep them intact for the code outside of BPF. Strictly,
      x26 is unnecessary, but SP requires 16-byte alignment.
      
      So, the BPF stack layout looks like:
      
                                       high
               original A64_SP =>   0:+-----+ BPF prologue
                                      |FP/LR|
               current A64_FP =>  -16:+-----+
                                      | ... | callee saved registers
                                      +-----+
                                      |     | x25/x26
               BPF fp register => -80:+-----+
                                      |     |
                                      | ... | BPF prog stack
                                      |     |
                                      |     |
               current A64_SP =>      +-----+
                                      |     |
                                      | ... | Function call stack
                                      |     |
                                      +-----+
                                        low
      
      Cc: Zi Shen Lim <zlim.lnx@gmail.com>
      Cc: Xi Wang <xi.wang@gmail.com>
      Signed-off-by: Yang Shi <yang.shi@linaro.org>
      Acked-by: Zi Shen Lim <zlim.lnx@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  22. 17 Nov 2015, 2 commits
  23. 07 Nov 2015, 2 commits
  24. 03 Oct 2015, 1 commit
  25. 26 Jun 2015, 1 commit
  26. 25 Jun 2015, 1 commit
    • arm64: bpf: fix out-of-bounds read in bpf2a64_offset() · 8eee539d
      Committed by Xi Wang
      Problems occur when bpf_to or bpf_from has value prog->len - 1 (e.g.,
      "Very long jump backwards" in test_bpf where the last instruction is a
      jump): since ctx->offset has length prog->len, ctx->offset[bpf_to + 1]
      or ctx->offset[bpf_from + 1] will cause an out-of-bounds read, leading
      to a bogus jump offset and kernel panic.
      
      This patch moves updating ctx->offset to after calling build_insn(),
      and changes indexing to use bpf_to and bpf_from without + 1.
      
      Fixes: e54bcde3 ("arm64: eBPF JIT compiler")
      Cc: <stable@vger.kernel.org> # 3.18+
      Cc: Zi Shen Lim <zlim.lnx@gmail.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Acked-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: Xi Wang <xi.wang@gmail.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
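      A toy model of the indexing change, assuming one offset[] entry per
      BPF instruction as in the JIT; with the jump sitting at the last
      instruction, the old offset[bpf_from + 1] read one element past the
      end of the array, while the fixed indexing stays in bounds:

        #include <stdio.h>

        #define LEN 4

        int main(void)
        {
            int offset[LEN];    /* one entry per BPF insn, length prog->len */
            int bpf_from = LEN - 1, bpf_to = 0;  /* backwards jump from last insn */

            for (int i = 0; i < LEN; i++)
                offset[i] = i + 1;      /* fake running JIT offsets */

            /* old: offset[bpf_to + 1] - offset[bpf_from + 1] would read
             * offset[LEN], out of bounds. new, in bounds: */
            printf("%d\n", offset[bpf_to] - offset[bpf_from]);
            return 0;
        }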
  27. 08 May 2015, 1 commit
    • arm64: bpf: fix signedness bug in loading 64-bit immediate · 1e4df6b7
      Committed by Xi Wang
      Consider "(u64)insn1.imm << 32 | imm" in the arm64 JIT. Since imm is
      a signed 32-bit value, it is sign-extended to 64 bits when OR-ed into
      the result, clobbering the high 32 bits. The fix is to convert imm to
      u32 first; it is then implicitly zero-extended to u64.
      
      Cc: Zi Shen Lim <zlim.lnx@gmail.com>
      Cc: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: <stable@vger.kernel.org>
      Fixes: 30d3d94c ("arm64: bpf: add 'load 64-bit immediate' instruction")
      Signed-off-by: Xi Wang <xi.wang@gmail.com>
      [will: removed non-arm64 bits and redundant casting]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
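      A minimal demonstration of the pitfall; the variable names mirror
      the description above, not the kernel source:

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            int32_t imm  = -1;          /* low 32 bits of the immediate */
            int32_t imm1 = 0x12345678;  /* high 32 bits (insn1.imm) */

            /* imm sign-extends to all ones, clobbering the high word: */
            uint64_t buggy = (uint64_t)imm1 << 32 | imm;
            /* zero-extend via u32 first, as in the fix: */
            uint64_t fixed = (uint64_t)imm1 << 32 | (uint32_t)imm;

            printf("buggy: %016llx\n", (unsigned long long)buggy);
            printf("fixed: %016llx\n", (unsigned long long)fixed);
            /* prints ffffffffffffffff vs 12345678ffffffff */
            return 0;
        }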
  28. 04 Dec 2014, 1 commit
  29. 21 Oct 2014, 4 commits