1. 01 Aug, 2019 1 commit
  2. 01 Jul, 2019 1 commit
  3. 21 Jun, 2019 1 commit
    • arm64: Fix incorrect irqflag restore for priority masking · bd82d4bd
      Committed by Julien Thierry
      When using IRQ priority masking to disable interrupts, in order to deal
      with the PSR.I state, local_irq_save() would convert the I bit into a
      PMR value (GIC_PRIO_IRQOFF). This resulted in local_irq_restore()
      potentially modifying the value of PMR in undesired locations due to the
      state of PSR.I upon flag saving [1].
      
      In an attempt to solve this issue in a less hackish manner, introduce
      a bit (GIC_PRIO_PSR_I_SET) for the PMR values that can represent
      whether PSR.I is being used to disable interrupts, in which case it
      takes precedence over the status of interrupt masking via PMR.
      
      GIC_PRIO_PSR_I_SET is chosen such that (<pmr_value> |
      GIC_PRIO_PSR_I_SET) does not mask more interrupts than <pmr_value> as
      some sections (e.g. arch_cpu_idle(), the interrupt acknowledge path)
      require PMR not to mask interrupts that could be signaled to the
      CPU when using only PSR.I.
      
      [1] https://www.spinics.net/lists/arm-kernel/msg716956.html
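      
      A minimal sketch of the scheme (the GIC_PRIO_* values here are
      illustrative assumptions, not quoted from the patch):
      
          /* Sketch only: a spare PMR bit records that PSR.I is masking. */
          #define GIC_PRIO_IRQON     0xe0        /* assumed "IRQs unmasked" value */
          #define GIC_PRIO_IRQOFF    0x60        /* assumed "IRQs masked" value   */
          #define GIC_PRIO_PSR_I_SET (1 << 4)
      
          /* Saving flags: fold the PSR.I state into the saved PMR value. */
          static inline unsigned long sketch_irq_save(unsigned long pmr, int psr_i)
          {
                  return psr_i ? (pmr | GIC_PRIO_PSR_I_SET) : pmr;
          }
      
      Because ORing in GIC_PRIO_PSR_I_SET only raises the PMR value (e.g.
      0x60 | 0x10 == 0x70), it can never mask more interrupts than the
      original <pmr_value>, which is what the constraint above requires.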
      
      Fixes: 4a503217 ("arm64: irqflags: Use ICC_PMR_EL1 for interrupt masking")
      Cc: <stable@vger.kernel.org> # 5.1.x-
      Reported-by: Zenghui Yu <yuzenghui@huawei.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Wei Li <liwei391@huawei.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Christoffer Dall <christoffer.dall@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Julien Thierry <julien.thierry@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  4. 19 Jun, 2019 1 commit
  5. 13 Apr, 2019 1 commit
  6. 06 Feb, 2019 2 commits
  7. 19 Oct, 2018 1 commit
  8. 15 Sep, 2018 1 commit
  9. 06 Jul, 2018 3 commits
    • arm64: remove unused COMPAT_PSR definitions · 7373fed2
      Committed by Mark Rutland
      Now that users have been migrated to PSR_AA32, kill the unused
      COMPAT_PSR definitions.
      
      The only difference we need a definition for is COMPAT_PSR_DIT_BIT,
      which differs from PSR_AA32_DIT_BIT.
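      
      For reference, a sketch of the two DIT encodings (bit positions per
      the architecture as I read it; treat the exact values as assumptions):
      
          /* AArch32 PSR format: DIT is bit 21. */
          #define COMPAT_PSR_DIT_BIT      0x00200000
          /* SPSR_ELx format for exceptions from AArch32: DIT re-uses the
           * old (RES0) J bit, bit 24. */
          #define PSR_AA32_DIT_BIT        0x01000000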
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: use PSR_AA32 definitions · d64567f6
      Committed by Mark Rutland
      Some code needs to inspect or manipulate the SPSR_ELx value for
      exceptions taken from AArch32; that value is already in the SPSR_ELx
      format, and not in the AArch32 PSR format.
      
      To separate these from cases where we care about the AArch32 PSR format,
      migrate these cases to use the PSR_AA32_* definitions rather than
      COMPAT_PSR_*.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: add PSR_AA32_* definitions · 25086263
      Committed by Mark Rutland
      The AArch32 CPSR/SPSR format is *almost* identical to the AArch64
      SPSR_ELx format for exceptions taken from AArch32, but the two have
      diverged with the addition of DIT, and we need to treat the two as
      logically distinct.
      
      This patch adds new definitions for the SPSR_ELx format for exceptions
      taken from AArch32, with a consistent PSR_AA32_ prefix. The existing
      COMPAT_PSR_ definitions will be used for the PSR format as seen from
      AArch32.
      
      Definitions of DIT are provided for both, and inline functions are
      provided to map between the two formats. Note that for SPSR_ELx, the
      (RES0) J bit has been re-allocated as the DIT bit.
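      
      A sketch of one direction of that mapping (assuming the DIT bit
      definitions above; the helper name is illustrative):
      
          /* Map an SPSR_ELx value (exception from AArch32) to the AArch32
           * PSR format: only the DIT bit moves, the rest is identical. */
          static inline unsigned long pstate_to_compat_psr(unsigned long pstate)
          {
                  unsigned long psr = pstate & ~PSR_AA32_DIT_BIT;
      
                  if (pstate & PSR_AA32_DIT_BIT)
                          psr |= COMPAT_PSR_DIT_BIT;
      
                  return psr;
          }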
      
      Once users of the COMPAT_PSR definitions have been migrated over to the
      PSR_AA32 definitions, the majority of the former will be removed, so
      no effort is made to avoid duplication until then.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoffer Dall <christoffer.dall@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Suzuki Poulose <suzuki.poulose@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  10. 09 Aug, 2017 1 commit
    • arm64: unwind: reference pt_regs via embedded stack frame · 73267498
      Committed by Ard Biesheuvel
      As it turns out, the unwind code is slightly broken, and probably has
      been for a while. The problem is in the dumping of the exception stack,
      which is intended to dump the contents of the pt_regs struct at each
      level in the call stack where an exception was taken and routed to a
      routine marked as __exception (which means its stack frame is right
      below the pt_regs struct on the stack).
      
      'Right below the pt_regs struct' is ill-defined, though: the unwind
      code assigns 'frame pointer + 0x10' to the .sp member of the stackframe
      struct at each level, and dump_backtrace() happily dereferences that as
      the pt_regs pointer when encountering an __exception routine. However,
      the actual size of the stack frame created by this routine (which could
      be one of many __exception routines we have in the kernel) is not known,
      and so frame.sp is pretty useless to figure out where struct pt_regs
      really is.
      
      So it seems the only way to ensure that we can find our struct pt_regs
      when walking the stack frames is to put it at a known fixed offset of
      the stack frame pointer that is passed to such __exception routines.
      The simplest way to do that is to put it inside pt_regs itself, which is
      the main change implemented by this patch. As a bonus, doing this allows
      us to get rid of a fair amount of cruft related to walking from one stack
      to the other, which is especially nice since we intend to introduce yet
      another stack for overflow handling once we add support for vmapped
      stacks. It also fixes an inconsistency where we only add a stack frame
      pointing to ELR_EL1 if we are executing from the IRQ stack but not when
      we are executing from the task stack.
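      
      The shape of the change, sketched (field layout abridged and
      illustrative, not the exact struct):
      
          struct pt_regs {
                  u64 regs[31];
                  u64 sp;
                  u64 pc;
                  u64 pstate;
                  /* ... */
                  u64 stackframe[2];      /* FP, LR: a dummy frame record at
                                             a known, fixed offset, so the
                                             unwinder can always locate the
                                             saved regs from the frame
                                             pointer */
          };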
      
      To consistently identify exception regs even in the presence of
      exceptions taken from entry code, we must check whether the next frame
      was created by entry text, rather than whether the current frame was
      created by exception text.
      
      To avoid backtracing using PCs that fall in the idmap, or are controlled
      by userspace, we must explicitly zero the FP and LR in startup paths, and
      must ensure that the frame embedded in pt_regs is zeroed upon entry from
      EL0. To avoid these NULL entries showing up in the backtrace,
      unwind_frame() is updated to avoid them.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [Mark: compare current frame against .entry.text, avoid bogus PCs]
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
  11. 07 Aug, 2017 2 commits
    • arm64: Abstract syscallno manipulation · 17c28958
      Committed by Dave Martin
      The -1 "no syscall" value is written in various ways, shared with
      the user ABI in some places, and generally obscure.
      
      This patch attempts to make things a little more consistent and
      readable by replacing all these uses with a single #define.  A
      couple of symbolic helpers are provided to clarify the intent
      further.
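      
      Sketched, the #define and helpers look something like this (details
      are illustrative, inferred from the description above):
      
          #define NO_SYSCALL (-1)
      
          /* Mark the task as not being in a syscall. */
          static inline void forget_syscall(struct pt_regs *regs)
          {
                  regs->syscallno = NO_SYSCALL;
          }
      
          /* The in-syscall check, now != NO_SYSCALL rather than >= 0. */
          static inline bool in_syscall(struct pt_regs const *regs)
          {
                  return regs->syscallno != NO_SYSCALL;
          }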
      
      Because the in-syscall check in do_signal() is changed from >= 0 to
      != NO_SYSCALL by this patch, different behaviour may be observable
      if syscallno is set to values less than -1 by a tracer.  However,
      this is not different from the behaviour that is already observable
      if a tracer sets syscallno to a value >= __NR_(compat_)syscalls.
      
      It appears that this can cause spurious syscall restarting, but
      that is not a new behaviour either, and does not appear harmful.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: syscallno is secretly an int, make it official · 35d0e6fb
      Committed by Dave Martin
      The upper 32 bits of the syscallno field in thread_struct are
      handled inconsistently, being sometimes zero extended and sometimes
      sign-extended.  In fact, only the lower 32 bits seem to have any
      real significance for the behaviour of the code: it's been OK to
      handle the upper bits inconsistently because they don't matter.
      
      Currently, the only place I can find where those bits are
      significant is in calling trace_sys_enter(), which may be
      unintentional: for example, if a compat tracer attempts to cancel a
      syscall by passing -1 to (COMPAT_)PTRACE_SET_SYSCALL at the
      syscall-enter-stop, it will be traced as syscall 4294967295
      rather than -1 as might be expected (and as occurs for a native
      tracer doing the same thing).  Elsewhere, reads of syscallno cast
      it to an int or truncate it.
      
      There's also a conspicuous amount of code and casting to bodge
      around the fact that although semantically an int, syscallno is
      stored as a u64.
      
      Let's not pretend any more.
      
      In order to preserve the stp x instruction that stores the syscall
      number in entry.S, this patch special-cases the layout of struct
      pt_regs for big endian so that the newly 32-bit syscallno field
      maps onto the low bits of the stored value.  This is not beautiful,
      but benchmarking of the getpid syscall on Juno indicates a
      minor slowdown if the stp is split into an stp x and stp w.
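      
      The special-casing amounts to something like this sketch (padding
      field name is illustrative):
      
          struct pt_regs {
                  /* ... */
          #ifdef __AARCH64EB__
                  u32 unused;     /* pad so syscallno maps onto the low 32
                                     bits of the 64-bit slot written by stp */
                  s32 syscallno;
          #else
                  s32 syscallno;
                  u32 unused;
          #endif
                  /* ... */
          };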
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  12. 15 Feb, 2017 1 commit
    • arm64: ptrace: add XZR-safe regs accessors · 6c23e2ff
      Committed by Mark Rutland
      In A64, XZR and the SP share the same encoding (31), and whether an
      instruction accesses XZR or SP for a particular register parameter
      depends on the definition of the instruction.
      
      We store the SP in pt_regs::regs[31], and thus when emulating
      instructions, we must be careful to not erroneously read from or write
      back to the saved SP. Unfortunately, we often fail to be this careful.
      
      In all cases, instructions using a transfer register parameter Xt use
      this to refer to XZR rather than SP. This patch adds helpers so that we
      can more easily and consistently handle these cases.
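      
      The helpers boil down to special-casing encoding 31 (a sketch; the
      kernel's versions may differ in detail):
      
          /* Read Xt where encoding 31 means XZR, not SP. */
          static inline u64 pt_regs_read_reg(const struct pt_regs *regs, int r)
          {
                  return (r == 31) ? 0 : regs->regs[r];
          }
      
          /* Write Xt, discarding writes to XZR instead of corrupting the
           * saved SP in pt_regs::regs[31]. */
          static inline void pt_regs_write_reg(struct pt_regs *regs, int r,
                                               u64 val)
          {
                  if (r != 31)
                          regs->regs[r] = val;
          }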
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  13. 08 Nov, 2016 1 commit
    • arm64: Add uprobe support · 9842ceae
      Committed by Pratyush Anand
      This patch adds support for uprobes on the ARM64 architecture.
      
      Unit tests for the following have been done so far, and they have been
      found working:
          1. Step-able instructions, like sub, ldr, add, etc.
          2. Simulation-able instructions, like ret, cbnz, cbz, etc.
          3. uretprobe
          4. Reject-able instructions, like sev, wfe, etc.
          5. Trapped and aborted XOL (execute-out-of-line) paths
          6. Probes at unaligned user addresses
          7. Longjump test cases
      
      Currently it does not support AArch32 instruction probing.
      Signed-off-by: Pratyush Anand <panand@redhat.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  14. 27 Jul, 2016 1 commit
  15. 19 Jul, 2016 3 commits
    • arm64: ptrace: remove extra define for CPSR's E bit · 9df53ff2
      Committed by Vladimir Murzin
      ...and do not confuse source navigation tools ;)
      Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Kprobes with single stepping support · 2dd0e8d2
      Committed by Sandeepa Prabhu
      Add support for basic kernel probes (kprobes) and jump probes
      (jprobes) for ARM64.
      
      Kprobes utilizes the software breakpoint and single-step debug
      exceptions supported on ARMv8.
      
      A software breakpoint is placed at the probe address to trap the
      kernel execution into the kprobe handler.
      
      ARMv8 supports enabling single stepping before returning (ERET) from
      the break exception, with the next PC in the exception return address
      (ELR_EL1). The
      kprobe handler prepares an executable memory slot for out-of-line
      execution with a copy of the original instruction being probed, and
      enables single stepping. The PC is set to the out-of-line slot address
      before the ERET. With this scheme, the instruction is executed with the
      exact same register context except for the PC (and DAIF) registers.
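      
      Very roughly, the execute-out-of-line (XOL) flow looks like this (all
      names here are illustrative, not the kernel's; PSTATE.SS as bit 21 is
      an assumption):
      
          #define SKETCH_SPSR_SS (1UL << 21)      /* PSTATE.SS single-step bit */
      
          /* The slot already holds a copy of the probed instruction. */
          static void sketch_setup_singlestep(struct pt_regs *regs,
                                              unsigned long xol_slot)
          {
                  regs->pc = xol_slot;            /* ERET resumes at the slot */
                  regs->pstate |= SKETCH_SPSR_SS; /* step one instruction, then
                                                     trap back to the handler */
          }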
      
      The debug mask (PSTATE.D) is enabled only when single stepping a
      recursive kprobe, e.g. during kprobe re-entry, so that the probed
      instruction can be single stepped within the kprobe handler's
      exception context. The recursion depth of kprobes is always 2, i.e.
      upon probe re-entry, any further re-entry is prevented by not calling
      handlers, and the case is counted as a missed kprobe.
      
      Single stepping from the XOL slot has a drawback for PC-relative
      accesses, such as branches and symbolic literal accesses, as the
      offset from the new PC (the slot address) is not guaranteed to fit in
      the immediate value of the opcode. Such instructions need simulation,
      so probing them is rejected.
      
      Instructions generating exceptions or causing a CPU mode change are
      rejected for probing.
      
      Exclusive load/store instructions are rejected too.  Additionally, the
      code is checked to see if it is inside an exclusive load/store sequence
      (code from Pratyush).
      
      System instructions are mostly enabled for stepping, except MSR/MRS
      accesses to "DAIF" flags in PSTATE, which are not safe for
      probing.
      
      This also changes arch/arm64/include/asm/ptrace.h to use
      include/asm-generic/ptrace.h.
      
      Thanks to Steve Capper and Pratyush Anand for several suggested
      changes.
      Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
      Signed-off-by: David A. Long <dave.long@linaro.org>
      Signed-off-by: Pratyush Anand <panand@redhat.com>
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Add HAVE_REGS_AND_STACK_ACCESS_API feature · 0a8ea52c
      Committed by David A. Long
      Add the HAVE_REGS_AND_STACK_ACCESS_API feature for arm64, including
      the supporting functions and defines.
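      
      Two representative pieces of that API, sketched from the description
      (signatures and details may differ from the kernel's):
      
          /* A function's (or syscall's) return value lives in x0. */
          static inline unsigned long regs_return_value(struct pt_regs *regs)
          {
                  return regs->regs[0];
          }
      
          /* Fetch a saved register by its byte offset within pt_regs;
           * callers are expected to pass a valid offset. */
          static inline unsigned long regs_get_register(struct pt_regs *regs,
                                                        unsigned int offset)
          {
                  return *(unsigned long *)((unsigned long)regs + offset);
          }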
      Signed-off-by: David A. Long <dave.long@linaro.org>
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      [catalin.marinas@arm.com: Remove unused functions]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  16. 07 Jul, 2016 1 commit
  17. 02 Mar, 2016 1 commit
    • arm64: Rework valid_user_regs · dbd4d7ca
      Committed by Mark Rutland
      We validate pstate using PSR_MODE32_BIT, which is part of the
      user-provided pstate (and cannot be trusted). Also, we conflate
      validation of AArch32 and AArch64 pstate values, making the code
      difficult to reason about.
      
      Instead, validate the pstate value based on the associated task. The
      task may or may not be current (e.g. when using ptrace), so this must be
      passed explicitly by callers. To avoid circular header dependencies via
      sched.h, is_compat_task is pulled out of asm/ptrace.h.
      
      To make the code possible to reason about, the AArch64 and AArch32
      validation is split into separate functions. Software must respect the
      RES0 policy for SPSR bits, and thus the kernel mirrors the hardware
      policy (RAZ/WI) for as-yet-unallocated bits. When these acquire an
      architected meaning, writes may be permitted (potentially with additional
      validation).
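      
      In outline, the split validation looks something like this (helper
      names are illustrative):
      
          /* Validate a user-supplied pstate against the task it is for,
           * never against the untrusted PSR_MODE32_BIT in the value itself. */
          static int sketch_valid_user_regs(struct user_pt_regs *regs,
                                            struct task_struct *task)
          {
                  if (is_compat_thread(task_thread_info(task)))
                          return valid_compat_regs(regs);   /* AArch32 rules */
      
                  return valid_native_regs(regs);           /* AArch64 rules */
          }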
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Cc: Dave Martin <dave.martin@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  18. 30 Oct, 2015 1 commit
  19. 27 Jul, 2015 1 commit
  20. 24 Jan, 2015 1 commit
    • arm64: Emulate SETEND for AArch32 tasks · 2d888f48
      Committed by Suzuki K. Poulose
      Emulate the deprecated 'setend' instruction for AArch32 tasks.
      
      	setend [le/be] - Sets the endianness of EL0
      
      On systems with CPUs which support mixed endian at EL0, the hardware
      support for the instruction can be enabled by setting the SCTLR_EL1.SED
      bit. Like the other emulated instructions it is controlled by an entry in
      /proc/sys/abi/. For more information see:
      	Documentation/arm64/legacy_instructions.txt
      
      The instruction is emulated by setting/clearing the SPSR_EL1.E bit, which
      will be reflected in the PSTATE.E in AArch32 context.
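      
      Sketched, the emulation is little more than flipping that bit in the
      saved state (bit position and names are assumptions):
      
          #define SKETCH_CPSR_E_BIT 0x00000200    /* CPSR.E, bit 9 */
      
          /* Emulate 'setend be'/'setend le' by toggling the E bit in the
           * saved SPSR; it becomes PSTATE.E on return to AArch32. */
          static int sketch_setend_handler(struct pt_regs *regs, int big_endian)
          {
                  if (big_endian)
                          regs->pstate |= SKETCH_CPSR_E_BIT;
                  else
                          regs->pstate &= ~SKETCH_CPSR_E_BIT;
      
                  regs->pc += 4;  /* skip the emulated A32 instruction */
                  return 0;
          }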
      
      This patch also restores the native endianness for the execution of signal
      handlers, since the process could have changed the endianness.
      
      Note: All CPUs on the system must have mixed endian support at EL0.
      Once the handler is registered, hotplugging a CPU which doesn't support
      mixed endian could lead to unexpected results/behavior in applications.
      Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Punit Agrawal <punit.agrawal@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  21. 29 Aug, 2014 1 commit
  22. 04 Jul, 2014 1 commit
  23. 12 May, 2014 1 commit
  24. 13 Mar, 2014 2 commits
  25. 26 Feb, 2014 1 commit
  26. 25 Oct, 2013 1 commit
  27. 12 Jun, 2013 1 commit
  28. 30 Jan, 2013 1 commit
  29. 05 Dec, 2012 2 commits
  30. 11 Oct, 2012 2 commits
  31. 27 Sep, 2012 1 commit