1. 04 May, 2020 · 1 commit
  2. 06 Nov, 2019 · 1 commit
  3. 19 Jun, 2019 · 1 commit
  4. 24 May, 2019 · 2 commits
  5. 27 Apr, 2019 · 1 commit
  6. 28 Nov, 2018 · 1 commit
  7. 06 Jul, 2018 · 1 commit
    • arm64: insn: Don't fallback on nosync path for general insn patching · 693350a7
      Committed by Will Deacon
      Patching kernel instructions at runtime requires other CPUs to undergo
      a context synchronisation event via an explicit ISB or an IPI in order
      to ensure that the new instructions are visible. This is required even
      for "hotpatch" instructions such as NOP and BL, so avoid optimising in
      this case and always go via stop_machine() when performing general
      patching.
      
      ftrace isn't quite as strict, so it can continue to call the nosync
      code directly.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      693350a7
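      As a rough illustration of the path this commit makes mandatory for general insn patching,
      the following is a simplified sketch modelled on arch/arm64/kernel/insn.c (the struct layout
      and names are simplified here, not the kernel's exact code). All online CPUs are funnelled
      through stop_machine(), so each of them takes a context synchronisation event before it can
      execute the newly written instruction:

        #include <linux/atomic.h>
        #include <linux/cpumask.h>
        #include <linux/stop_machine.h>
        #include <linux/types.h>
        #include <asm/barrier.h>
        #include <asm/insn.h>

        /* Simplified stand-in for the patch descriptor used by insn.c. */
        struct insn_patch_sketch {
                void            **addrs;       /* locations to patch              */
                u32             *insns;        /* new instruction encodings       */
                int             cnt;           /* number of instructions          */
                atomic_t        cpu_count;     /* elects the CPU doing the writes */
        };

        static int insn_patch_cb_sketch(void *arg)
        {
                struct insn_patch_sketch *p = arg;
                int i, ret = 0;

                if (atomic_inc_return(&p->cpu_count) == 1) {
                        /* First CPU in: write the new encodings (with I-cache
                         * maintenance), then release the waiting CPUs. */
                        for (i = 0; ret == 0 && i < p->cnt; i++)
                                ret = aarch64_insn_patch_text_nosync(p->addrs[i],
                                                                     p->insns[i]);
                        atomic_inc(&p->cpu_count);
                } else {
                        /* Everyone else waits, then takes an ISB so the new
                         * instructions are guaranteed visible on this CPU. */
                        while (atomic_read(&p->cpu_count) <= num_online_cpus())
                                cpu_relax();
                        isb();
                }
                return ret;
        }

        static int insn_patch_text_sketch(void *addrs[], u32 insns[], int cnt)
        {
                struct insn_patch_sketch p = {
                        .addrs = addrs, .insns = insns, .cnt = cnt,
                        .cpu_count = ATOMIC_INIT(0),
                };

                /* Every online CPU runs the callback and therefore undergoes a
                 * context synchronisation event before resuming execution. */
                return stop_machine(insn_patch_cb_sketch, &p, cpu_online_mask);
        }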
  8. 19 Mar, 2018 · 3 commits
  9. 26 May, 2017 · 1 commit
    • arm64: Prevent cpu hotplug rwsem recursion · c23a4656
      Committed by Thomas Gleixner
      The text patching functions invoked from the jump_label and kprobes
      code are protected against CPU hotplug at their call sites.
      
      Use stop_machine_cpuslocked() to avoid recursion on the cpu hotplug
      rwsem. stop_machine_cpuslocked() contains a lockdep assertion to catch any
      unprotected callers.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: linux-arm-kernel@lists.infradead.org
      Link: http://lkml.kernel.org/r/20170524081549.197070135@linutronix.de
      c23a4656
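      To make the recursion issue concrete, here is a hedged sketch of the call pattern this
      commit enables (the callback and function names are illustrative; only the locking and
      stop_machine_cpuslocked() APIs are real). The caller already holds the CPU hotplug read
      lock, so plain stop_machine(), which takes that rwsem itself, would recurse:

        #include <linux/cpu.h>
        #include <linux/cpumask.h>
        #include <linux/stop_machine.h>

        static int patch_cb_sketch(void *arg)
        {
                /* Illustrative callback; the real callers are the jump_label
                 * and kprobes text patching paths. */
                return 0;
        }

        static int patch_under_hotplug_lock_sketch(void *arg)
        {
                int ret;

                cpus_read_lock();       /* hotplug rwsem held across the patch */

                /*
                 * stop_machine() would call cpus_read_lock() again and recurse
                 * on the rwsem; the _cpuslocked variant instead asserts (via
                 * lockdep) that the lock is already held.
                 */
                ret = stop_machine_cpuslocked(patch_cb_sketch, arg, cpu_online_mask);

                cpus_read_unlock();
                return ret;
        }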
  10. 03 May, 2017 · 1 commit
    • bpf, arm64: implement jiting of BPF_XADD · 85f68fe8
      Committed by Daniel Borkmann
      This work adds BPF_XADD for BPF_W/BPF_DW to the arm64 JIT and thereby
      completes JITing of all BPF instructions; we can therefore remove the
      'notyet' label and no longer need to fall back to the interpreter when
      BPF_XADD is used in a program.
      
      This also brings the arm64 JIT in line with x86_64, s390x, ppc64 and
      sparc64, where all current eBPF features are supported.
      
      BPF_W example from test_bpf:
      
        .u.insns_int = {
          BPF_ALU32_IMM(BPF_MOV, R0, 0x12),
          BPF_ST_MEM(BPF_W, R10, -40, 0x10),
          BPF_STX_XADD(BPF_W, R10, R0, -40),
          BPF_LDX_MEM(BPF_W, R0, R10, -40),
          BPF_EXIT_INSN(),
        },
      
        [...]
        00000020:  52800247  mov w7, #0x12 // #18
        00000024:  928004eb  mov x11, #0xffffffffffffffd8 // #-40
        00000028:  d280020a  mov x10, #0x10 // #16
        0000002c:  b82b6b2a  str w10, [x25,x11]
        // start of xadd mapping:
        00000030:  928004ea  mov x10, #0xffffffffffffffd8 // #-40
        00000034:  8b19014a  add x10, x10, x25
        00000038:  f9800151  prfm pstl1strm, [x10]
        0000003c:  885f7d4b  ldxr w11, [x10]
        00000040:  0b07016b  add w11, w11, w7
        00000044:  880b7d4b  stxr w11, w11, [x10]
        00000048:  35ffffab  cbnz w11, 0x0000003c
        // end of xadd mapping:
        [...]
      
      BPF_DW example from test_bpf:
      
        .u.insns_int = {
          BPF_ALU32_IMM(BPF_MOV, R0, 0x12),
          BPF_ST_MEM(BPF_DW, R10, -40, 0x10),
          BPF_STX_XADD(BPF_DW, R10, R0, -40),
          BPF_LDX_MEM(BPF_DW, R0, R10, -40),
          BPF_EXIT_INSN(),
        },
      
        [...]
        00000020:  52800247  mov w7,  #0x12 // #18
        00000024:  928004eb  mov x11, #0xffffffffffffffd8 // #-40
        00000028:  d280020a  mov x10, #0x10 // #16
        0000002c:  f82b6b2a  str x10, [x25,x11]
        // start of xadd mapping:
        00000030:  928004ea  mov x10, #0xffffffffffffffd8 // #-40
        00000034:  8b19014a  add x10, x10, x25
        00000038:  f9800151  prfm pstl1strm, [x10]
        0000003c:  c85f7d4b  ldxr x11, [x10]
        00000040:  8b07016b  add x11, x11, x7
        00000044:  c80b7d4b  stxr w11, x11, [x10]
        00000048:  35ffffab  cbnz w11, 0x0000003c
        // end of xadd mapping:
        [...]
      
      Tested on Cavium ThunderX ARMv8, test suite results after the patch:
      
        No JIT:   [ 3751.855362] test_bpf: Summary: 311 PASSED, 0 FAILED, [0/303 JIT'ed]
        With JIT: [ 3573.759527] test_bpf: Summary: 311 PASSED, 0 FAILED, [303/303 JIT'ed]
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      85f68fe8
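      The ldxr/add/stxr/cbnz sequence in the listings above is a standard LL/SC retry loop. As an
      illustrative C equivalent (not part of the JIT; the function name is mine), the BPF_W case
      behaves like a relaxed atomic fetch-and-add on the stack slot:

        #include <stdint.h>

        /* Semantic equivalent of the JITed xadd mapping for BPF_W: the JIT
         * emits the LL/SC loop directly, this merely expresses the same
         * atomic add with a compiler builtin. */
        static inline void xadd32_sketch(uint32_t *ptr, uint32_t val)
        {
                (void)__atomic_fetch_add(ptr, val, __ATOMIC_RELAXED);
        }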
  11. 11 Jan, 2017 · 1 commit
  12. 09 Sep, 2016 · 1 commit
  13. 19 Jul, 2016 · 3 commits
    • arm64: Kprobes with single stepping support · 2dd0e8d2
      Committed by Sandeepa Prabhu
      Add support for basic kernel probes (kprobes) and jump probes
      (jprobes) on ARM64.
      
      Kprobes utilizes the software breakpoint and single-step debug
      exceptions supported on ARMv8.
      
      A software breakpoint is placed at the probe address to trap the
      kernel execution into the kprobe handler.
      
      ARMv8 supports enabling single stepping before returning from the
      breakpoint exception (ERET), with the next PC held in the exception
      link register (ELR_EL1). The kprobe handler prepares an executable
      memory slot for out-of-line execution with a copy of the original
      instruction being probed, and enables single stepping. The PC is set to
      the out-of-line slot address before the ERET. With this scheme, the
      instruction is executed with exactly the same register context except
      for the PC (and DAIF) registers.
      
      The debug mask (PSTATE.D) is enabled only when single stepping a
      recursive kprobe, e.g. during kprobe re-entry, so that the probed
      instruction can be single stepped within the kprobe handler's exception
      context. The recursion depth of kprobes is limited to 2: upon probe
      re-entry, any further re-entry is prevented by not calling the handlers,
      and the case is counted as a missed kprobe.
      
      Single stepping from the out-of-line (x-o-l) slot has a drawback for
      PC-relative accesses such as branches and symbolic literal loads, as the
      offset from the new PC (the slot address) is not guaranteed to fit in
      the immediate field of the opcode. Such instructions would need
      simulation, so probing them is rejected.
      
      Instructions that generate exceptions or change the CPU mode are also
      rejected for probing.
      
      Exclusive load/store instructions are rejected too.  Additionally, the
      code is checked to see if it is inside an exclusive load/store sequence
      (code from Pratyush).
      
      System instructions are mostly allowed for stepping, except MSR/MRS
      accesses to the "DAIF" flags in PSTATE, which are not safe for
      probing.
      
      This also changes arch/arm64/include/asm/ptrace.h to use
      include/asm-generic/ptrace.h.
      
      Thanks to Steve Capper and Pratyush Anand for several suggested
      changes.
      Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
      Signed-off-by: David A. Long <dave.long@linaro.org>
      Signed-off-by: Pratyush Anand <panand@redhat.com>
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      2dd0e8d2
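      A hedged sketch of the single-step setup described above (the helpers
      instruction_pointer_set() and kernel_enable_single_step() are real arm64/ptrace helpers, but
      the slot bookkeeping is simplified and the function itself is illustrative, not the kernel's
      kprobes code): the exception return address is pointed at the out-of-line slot holding a
      copy of the probed instruction, and hardware single stepping is armed so that the following
      ERET executes exactly one instruction from the slot.

        #include <asm/debug-monitors.h>
        #include <asm/ptrace.h>

        static void setup_singlestep_sketch(struct pt_regs *regs,
                                            unsigned long slot_addr)
        {
                /* ELR_EL1 (regs->pc) now points at the copied instruction. */
                instruction_pointer_set(regs, slot_addr);

                /* Arm MDSCR_EL1/SPSR single stepping; the step exception taken
                 * after ERET hands control back to the kprobe code, which then
                 * fixes up the PC to resume after the probed address. */
                kernel_enable_single_step(regs);
        }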
    • arm64: add conditional instruction simulation support · 2af3ec08
      Committed by David A. Long
      Cease using the arm32 arm_check_condition() function and replace it with
      a local version for use in deprecated instruction support on arm64. Also
      make the function table it uses available for future use by kprobes
      and/or uprobes.
      
      This function is derived from code written by Sandeepa Prabhu.
      Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
      Signed-off-by: David A. Long <dave.long@linaro.org>
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      2af3ec08
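      As an illustrative sketch of what one entry in such a condition-check table does (the PSR
      bit masks follow the architectural NZCV layout; the function and table names are
      hypothetical, not the commit's actual code), each entry evaluates one condition code against
      the saved PSTATE flags:

        #include <linux/types.h>

        #define PSR_N_BIT_SKETCH   0x80000000U
        #define PSR_Z_BIT_SKETCH   0x40000000U
        #define PSR_V_BIT_SKETCH   0x10000000U

        /* cond 0b0000 (EQ): true when Z == 1 */
        static unsigned long cond_eq_sketch(unsigned long pstate)
        {
                return pstate & PSR_Z_BIT_SKETCH;
        }

        /* cond 0b1011 (LT): true when N != V */
        static unsigned long cond_lt_sketch(unsigned long pstate)
        {
                return !(pstate & PSR_N_BIT_SKETCH) != !(pstate & PSR_V_BIT_SKETCH);
        }

        /* An indexed table of such checks (one per condition code) is what the
         * deprecated-instruction emulation, and later kprobes, can consult. */
        static unsigned long (*const cond_checks_sketch[16])(unsigned long) = {
                [0x0] = cond_eq_sketch,
                [0xb] = cond_lt_sketch,
                /* remaining condition codes elided in this sketch */
        };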
    • arm64: Add more test functions to insn.c · d59bee88
      Committed by David A. Long
      Certain instructions are hard to execute correctly out of line (as in
      kprobes).  Test functions are added to insn.[hc] to identify these.  The
      instructions include any that use PC-relative addressing, change the PC,
      or change interrupt masking. For efficiency and simplicity, test
      functions are also added for small collections of related instructions.
      Signed-off-by: David A. Long <dave.long@linaro.org>
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      d59bee88
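      The test helpers this commit adds follow the usual insn.c pattern of checking the
      instruction word against a fixed mask/value pair. A hedged sketch of that pattern for one
      case (the ADR mask/value matches the standard A64 encoding; the function names are
      illustrative, not the commit's):

        #include <linux/types.h>

        /* In the style of insn.c's aarch64_insn_is_*() predicates: ADR is
         * PC-relative, so an instruction matching this encoding cannot safely
         * be single stepped out of line. */
        static bool sketch_insn_is_adr(u32 insn)
        {
                return (insn & 0x9F000000) == 0x10000000;
        }

        /* Broader "uses PC-relative addressing" checks are then built up from
         * such predicates (ADRP, literal loads, branches, and so on). */
        static bool sketch_insn_uses_pc_relative(u32 insn)
        {
                return sketch_insn_is_adr(insn);   /* plus further cases */
        }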
  14. 03 Jun, 2015 · 1 commit
    • arm64: insn: Add aarch64_{get,set}_branch_offset · 10b48f7e
      Committed by Marc Zyngier
      In order to deal with branches located in alternate sequences but
      pointing to the main kernel text, we need to extract the relative
      displacement encoded in the instruction and to be able to update that
      instruction with a new offset (once it is known).
      
      For this, we introduce three new helpers:
      - aarch64_insn_is_branch_imm is a predicate indicating whether the
        instruction is an immediate branch;
      - aarch64_get_branch_offset returns a signed value representing
        the byte offset encoded in a branch instruction;
      - aarch64_set_branch_offset takes an instruction and an offset,
        and returns the corresponding updated instruction.
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      10b48f7e
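      A hedged sketch of the decoding idea behind aarch64_get_branch_offset for the unconditional
      B/BL form (the helper name is mine, and the kernel's version also handles conditional
      branches and CBZ/CBNZ/TBZ/TBNZ forms): the 26-bit immediate is a word offset, so it is
      sign-extended and multiplied by 4 to yield a byte offset.

        #include <linux/bitops.h>
        #include <linux/types.h>

        /* Illustrative only: extract the byte offset encoded in a B/BL
         * instruction. imm26 sits in bits [25:0] and counts 32-bit words. */
        static s64 sketch_b_imm26_to_offset(u32 insn)
        {
                s64 imm = sign_extend64(insn & 0x03FFFFFF, 25); /* imm26, signed */

                return imm << 2;                                /* words -> bytes */
        }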
  15. 30 Mar, 2015 · 1 commit
  16. 23 Feb, 2015 · 1 commit
  17. 21 Nov, 2014 · 3 commits
  18. 08 Sep, 2014 · 13 commits
  19. 29 May, 2014 · 1 commit
  20. 08 Jan, 2014 · 2 commits