  1. 25 Jun 2015, 1 commit
  2. 19 Jun 2015, 4 commits
  3. 17 Jun 2015, 3 commits
    • arm64: compat: print compat_sp instead of sp · 4e2ee96a
      Committed by Vladimir Murzin
      We check against compat_sp, but print out arm64's sp - fix it.
      Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
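      A minimal C sketch of the mismatch described above, with hypothetical field
      names and check condition (the real change is in the arm64 compat
      diagnostics): the value printed must be the same one that was checked.

        #include <stdio.h>

        /* Hypothetical register layout, named after the fields the commit mentions. */
        struct regs_sketch {
            unsigned long sp;         /* native AArch64 stack pointer   */
            unsigned long compat_sp;  /* AArch32 (compat) stack pointer */
        };

        static void warn_bad_compat_sp(const struct regs_sketch *regs)
        {
            if (regs->compat_sp & 0x7)
                /* Before the fix the message printed regs->sp, i.e. a value
                 * that was never tested by the condition above. */
                printf("bad compat sp: %#lx\n", regs->compat_sp);
        }

        int main(void)
        {
            struct regs_sketch r = { .sp = 0xffff000012345678UL, .compat_sp = 0xbeef0003UL };
            warn_bad_compat_sp(&r);
            return 0;
        }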
    • arm64: mm: Fix freeing of the wrong memmap entries with !SPARSEMEM_VMEMMAP · b9bcc919
      Committed by Dave P Martin
      The memmap freeing code in free_unused_memmap() computes the end of
      each memblock by adding the memblock size onto the base.  However,
      if SPARSEMEM is enabled then the value (start) used for the base
      may already have been rounded downwards to work out which memmap
      entries to free after the previous memblock.
      
      This may cause memmap entries that are in use to get freed.
      
      In general, you're not likely to hit this problem unless there
      are at least 2 memblocks and one of them is not aligned to a
      sparsemem section boundary.  Note that carve-outs can increase
      the number of memblocks by splitting the regions listed in the
      device tree.
      
      This problem doesn't occur with SPARSEMEM_VMEMMAP, because the
      vmemmap code deals with freeing the unused regions of the memmap
      instead of requiring the arch code to do it.
      
      This patch gets the memblock base out of the memblock directly when
      computing the block end address to ensure the correct value is used.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
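      A self-contained sketch of the arithmetic error, using illustrative
      constants rather than the kernel's memblock/pfn helpers: once the start
      has been rounded down to a section boundary, adding the block size to it
      undershoots the real end of the block, so live memmap entries between the
      two end values look unused and get freed.

        #include <stdio.h>

        #define SECTION_SIZE 0x10000UL   /* illustrative sparsemem section size */

        int main(void)
        {
            unsigned long base = 0x24000UL;  /* block not aligned to a section */
            unsigned long size = 0x08000UL;

            /* Earlier bookkeeping may round the start DOWN to a section boundary. */
            unsigned long start = base & ~(SECTION_SIZE - 1);        /* 0x20000 */

            unsigned long buggy_end = start + size;                  /* 0x28000 */
            unsigned long fixed_end = base + size;                   /* 0x2c000 */

            printf("buggy end %#lx vs real end %#lx: %#lx bytes of live memmap "
                   "would be treated as free\n",
                   buggy_end, fixed_end, fixed_end - buggy_end);
            return 0;
        }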
    • arm64: entry: fix context tracking for el0_sp_pc · 46b0567c
      Committed by Mark Rutland
      Commit 6c81fe79 ("arm64: enable context tracking") did not
      update el0_sp_pc to use ct_user_exit, but this appears to have been
      unintentional. In commit 6ab6463a ("arm64: adjust el0_sync so
      that a function can be called") we made x0 available, and in the return
      to userspace we call ct_user_enter in the kernel_exit macro.
      
      Due to this, we currently don't correctly inform RCU of the user->kernel
      transition, and may erroneously account for time spent in the kernel as
      if we were in an extended quiescent state when CONFIG_CONTEXT_TRACKING
      is enabled.
      
      As we do record the kernel->user transition, a userspace application
      making accesses from an unaligned stack pointer can demonstrate the
      imbalance, provoking the following warning:
      
      ------------[ cut here ]------------
      WARNING: CPU: 2 PID: 3660 at kernel/context_tracking.c:75 context_tracking_enter+0xd8/0xe4()
      Modules linked in:
      CPU: 2 PID: 3660 Comm: a.out Not tainted 4.1.0-rc7+ #8
      Hardware name: ARM Juno development board (r0) (DT)
      Call trace:
      [<ffffffc000089914>] dump_backtrace+0x0/0x124
      [<ffffffc000089a48>] show_stack+0x10/0x1c
      [<ffffffc0005b3cbc>] dump_stack+0x84/0xc8
      [<ffffffc0000b3214>] warn_slowpath_common+0x98/0xd0
      [<ffffffc0000b330c>] warn_slowpath_null+0x14/0x20
      [<ffffffc00013ada4>] context_tracking_enter+0xd4/0xe4
      [<ffffffc0005b534c>] preempt_schedule_irq+0xd4/0x114
      [<ffffffc00008561c>] el1_preempt+0x4/0x28
      [<ffffffc0001b8040>] exit_files+0x38/0x4c
      [<ffffffc0000b5b94>] do_exit+0x430/0x978
      [<ffffffc0000b614c>] do_group_exit+0x40/0xd4
      [<ffffffc0000c0208>] get_signal+0x23c/0x4f4
      [<ffffffc0000890b4>] do_signal+0x1ac/0x518
      [<ffffffc000089650>] do_notify_resume+0x5c/0x68
      ---[ end trace 963c192600337066 ]---
      
      This patch adds the missing ct_user_exit to the el0_sp_pc entry path,
      correcting the context tracking for this case.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Fixes: 6c81fe79 ("arm64: enable context tracking")
      Cc: <stable@vger.kernel.org> # v3.17+
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
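      A toy C model of the imbalance, with made-up helper names standing in for
      ct_user_exit/ct_user_enter: if the entry path never reports the
      user->kernel transition, the matching report on the way back to userspace
      finds an unexpected state and warns, much like the el0_sp_pc path did
      before this fix.

        #include <stdio.h>

        enum ctx_state { IN_USER, IN_KERNEL };
        static enum ctx_state state = IN_USER;

        static void user_exit_toy(void)   /* entry from EL0: user -> kernel */
        {
            state = IN_KERNEL;
        }

        static void user_enter_toy(void)  /* return to EL0: kernel -> user */
        {
            if (state != IN_KERNEL)       /* the exit was never reported */
                fprintf(stderr, "WARNING: context tracking imbalance\n");
            state = IN_USER;
        }

        int main(void)
        {
            /* Buggy el0_sp_pc-style path: the handler runs without reporting
             * the exit, then the return path reports the enter and warns. */
            /* user_exit_toy();   <-- the missing ct_user_exit */
            user_enter_toy();
            return 0;
        }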
  4. 15 Jun 2015, 1 commit
  5. 12 Jun 2015, 6 commits
  6. 11 Jun 2015, 1 commit
  7. 09 Jun 2015, 1 commit
    • arm64: fix missing syscall trace exit · 04d7e098
      Committed by Josh Stone
      If a syscall is entered without TIF_SYSCALL_TRACE set, then it goes on
      the fast path.  It's then possible to have TIF_SYSCALL_TRACE added in
      the middle of the syscall, but ret_fast_syscall doesn't check this flag
      again.  This causes a ptrace syscall-exit-stop to be missed.
      
      For instance, from a PTRACE_EVENT_FORK reported during do_fork, the
      tracer might resume with PTRACE_SYSCALL, setting TIF_SYSCALL_TRACE.
      Now the completion of the fork should have a syscall-exit-stop.
      
      Russell King fixed this on arm by re-checking _TIF_SYSCALL_WORK in the
      fast exit path.  Do the same on arm64.
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Cc: Russell King <rmk+kernel@arm.linux.org.uk>
      Signed-off-by: Josh Stone <jistone@redhat.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
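      A hedged C sketch of the race (toy flag and function names, not the arm64
      entry.S code): a trace flag set mid-syscall is only seen if the exit path
      re-reads the live thread flags instead of the snapshot taken at entry.

        #include <stdio.h>

        #define TIF_SYSCALL_TRACE (1u << 0)

        static unsigned int thread_flags;   /* may change mid-syscall, e.g. via ptrace */

        static void syscall_trace_exit(void) { puts("syscall-exit-stop reported"); }

        static void ret_fast_syscall_sketch(unsigned int flags_at_entry)
        {
            (void)flags_at_entry;
            /* Buggy version used the stale snapshot taken at entry:
             *     if (flags_at_entry & TIF_SYSCALL_TRACE) syscall_trace_exit();
             * Fixed version re-checks the live flags on the exit path: */
            if (thread_flags & TIF_SYSCALL_TRACE)
                syscall_trace_exit();
        }

        int main(void)
        {
            unsigned int at_entry = thread_flags;   /* trace flag clear at entry   */
            thread_flags |= TIF_SYSCALL_TRACE;      /* tracer sets it mid-syscall  */
            ret_fast_syscall_sketch(at_entry);      /* exit stop is not missed now */
            return 0;
        }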
  8. 05 Jun 2015, 5 commits
  9. 03 Jun 2015, 3 commits
  10. 02 Jun 2015, 2 commits
  11. 01 Jun 2015, 1 commit
  12. 27 May 2015, 8 commits
  13. 21 May 2015, 1 commit
  14. 20 May 2015, 1 commit
    • arm64: perf: Fix callchain parse error with kernel tracepoint events · 5b09a094
      Committed by Hou Pengyang
      For ARM64, when tracing with tracepoint events, the IP and pstate are set
      to 0, preventing the perf code from parsing the callchain and resolving the
      symbols correctly.
      
       ./perf record -e sched:sched_switch -g --call-graph dwarf ls
          [ perf record: Captured and wrote 0.146 MB perf.data ]
       ./perf report -f
          Samples: 194  of event 'sched:sched_switch', Event count (approx.): 194
          Children      Self    Command  Shared Object     Symbol
          100.00%       100.00%  ls       [unknown]         [.] 0000000000000000
      
      The fix is to implement perf_arch_fetch_caller_regs for ARM64, which fills
      in the registers needed for callchain unwinding, including pc, sp,
      fp and spsr.
      
      With this patch, callchain can be parsed correctly as follows:
      
           ......
      +    2.63%     0.00%  ls       [kernel.kallsyms]  [k] vfs_symlink
      +    2.63%     0.00%  ls       [kernel.kallsyms]  [k] follow_down
      +    2.63%     0.00%  ls       [kernel.kallsyms]  [k] pfkey_get
      +    2.63%     0.00%  ls       [kernel.kallsyms]  [k] do_execveat_common.isra.33
      -    2.63%     0.00%  ls       [kernel.kallsyms]  [k] pfkey_send_policy_notify
           pfkey_send_policy_notify
           pfkey_get
           v9fs_vfs_rename
           page_follow_link_light
           link_path_walk
           el0_svc_naked
          .......
      Signed-off-by: Hou Pengyang <houpengyang@huawei.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
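      A hedged C sketch of the idea, with assumed structure and helper names
      rather than the arm64 patch itself: the hook records pc, fp, sp and the
      processor state at the tracepoint site so the unwinder has a valid
      starting point instead of zeroes.

        #include <stdio.h>
        #include <stdint.h>

        /* Assumed names; the real hook is arm64's perf_arch_fetch_caller_regs(). */
        struct regs_snapshot {
            unsigned long pc;      /* where the event fired          */
            unsigned long sp;      /* stack pointer at the call site */
            unsigned long fp;      /* frame pointer for the unwinder */
            unsigned long pstate;  /* processor state (spsr)         */
        };

        static void fetch_caller_regs_sketch(struct regs_snapshot *regs, unsigned long ip)
        {
            regs->pc = ip;
            regs->fp = (unsigned long)__builtin_frame_address(0);
        #if defined(__aarch64__)
            asm volatile("mov %0, sp" : "=r"(regs->sp));  /* read the live sp */
        #else
            regs->sp = regs->fp;       /* rough stand-in when not on arm64 */
        #endif
            regs->pstate = 0;          /* a real implementation records the current mode */
        }

        int main(void)
        {
            struct regs_snapshot r;
            fetch_caller_regs_sketch(&r, (unsigned long)(uintptr_t)&main);
            printf("pc=%#lx sp=%#lx fp=%#lx\n", r.pc, r.sp, r.fp);
            return 0;
        }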
  15. 19 May 2015, 2 commits