1. 09 Jan, 2016 (2 commits)
  2. 07 Jan, 2016 (1 commit)
    • arm64: head.S: use memset to clear BSS · 2a803c4d
      Mark Rutland authored
      Currently we use an open-coded memzero to clear the BSS. As it is a
      trivial implementation, it is sub-optimal.
      
      Our optimised memset doesn't use the stack, is position-independent, and
      for the memzero case can make use of DC ZVA to clear large blocks
      efficiently. In __mmap_switched the MMU is on and there are no live
      caller-saved registers, so we can safely call an uninstrumented memset.
      
      This patch changes __mmap_switched to use memset when clearing the BSS.
      We use the __pi_memset alias so as to avoid any instrumentation in all
      kernel configurations (a C sketch of the idea follows this entry).
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      2a803c4d
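
      What the change amounts to, sketched in C (the real change is in the
      head.S assembly; the linker symbol names below are assumptions):

        #include <string.h>

        /* Assumed linker-provided markers delimiting the BSS section. */
        extern char __bss_start[], __bss_stop[];

        static void clear_bss(void)
        {
                /* The kernel calls the uninstrumented __pi_memset alias;
                 * plain memset is shown here as the conceptual equivalent. */
                memset(__bss_start, 0, __bss_stop - __bss_start);
        }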
  3. 06 Jan, 2016 (1 commit)
    • arm64: entry: remove pointless SPSR mode check · ee03353b
      Mark Rutland authored
      In work_pending, we may skip work if the stacked SPSR value represents
      anything other than an EL0 context. We then immediately invoke the
      kernel_exit 0 macro as part of ret_to_user, assuming a return to EL0.
      This is somewhat confusing.
      
      We use work_pending as part of the ret_to_user/ret_fast_syscall state
      machine. We only use ret_fast_syscall in the return from an SVC issued
      from EL0. We use ret_to_user for return from EL0 exception handlers and
      also for the return from ret_from_fork when the task is not a kernel
      thread (i.e. it is a user task).
      
      Thus in all cases the stacked SPSR value must represent an EL0 context,
      and the check is redundant. This patch removes it, along with the now
      unused no_work_pending label (the removed test is sketched in C below).
      
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      ee03353b
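
      For reference, the removed test boils down to checking the mode field
      of the stacked SPSR; a hedged C rendering (the constants follow the
      AArch64 PSTATE mode encoding):

        #include <stdbool.h>
        #include <stdint.h>

        #define PSR_MODE_MASK  0x0000000fULL   /* mode field of the SPSR */
        #define PSR_MODE_EL0t  0x00000000ULL   /* EL0 using SP_EL0 */

        /* The check removed by this patch: was the exception taken from EL0? */
        static bool spsr_is_el0(uint64_t spsr)
        {
                return (spsr & PSR_MODE_MASK) == PSR_MODE_EL0t;
        }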
  4. 05 Jan, 2016 (5 commits)
  5. 22 Dec, 2015 (9 commits)
    • arm64: perf: add support for Cortex-A72 · 5d7ee877
      Will Deacon authored
      Cortex-A72 has a PMUv3 implementation that is compatible with the PMU
      implemented by Cortex-A57.
      
      This patch hooks up the new compatible string so that the Cortex-A57
      event mappings are used (see the sketch below).
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      5d7ee877
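
      The hook-up is essentially one more of_device_id entry that reuses the
      Cortex-A57 event map; a sketch of the pattern (table and init-function
      names are assumptions, not the verbatim kernel table):

        #include <linux/of.h>

        static const struct of_device_id armv8_pmu_of_device_ids[] = {
                { .compatible = "arm,cortex-a57-pmu", .data = armv8_a57_pmu_init },
                /* New entry: A72's PMUv3 is compatible with the A57 mappings. */
                { .compatible = "arm,cortex-a72-pmu", .data = armv8_a72_pmu_init },
                { /* sentinel */ },
        };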
    • arm64: perf: add format entry to describe event -> config mapping · 57d74123
      Will Deacon authored
      It's all very well providing an events directory to userspace that
      details our events in terms of "event=0xNN", but if we don't define how
      to encode the "event" field in the perf attr.config, then it's a waste
      of time.
      
      This patch adds a single format entry to describe that the event field
      occupies the bottom 10 bits of our config field on ARMv8 (PMUv3); a
      sketch follows this entry.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      57d74123
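
      The kind of format entry being described, sketched with the generic
      PMU_FORMAT_ATTR helper (attribute and group names are assumptions):

        #include <linux/perf_event.h>

        /* "event" occupies bits 0-9 of perf_event_attr.config on PMUv3. */
        PMU_FORMAT_ATTR(event, "config:0-9");

        static struct attribute *armv8_pmuv3_format_attrs[] = {
                &format_attr_event.attr,
                NULL,
        };

        static struct attribute_group armv8_pmuv3_format_attr_group = {
                .name  = "format",
                .attrs = armv8_pmuv3_format_attrs,
        };

      With such an entry exported, userspace tooling can specify events as
      <pmu>/event=0xNN/ and have perf place the value in the right bits of
      attr.config.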
    • arm64: dts: Add dts files to enable ION on Hi6220 SoC. · 59dfafd0
      Chen Feng authored
      Add an ION node to enable ION on the hi6220 SoC platform.
      Signed-off-by: Chen Feng <puck.chen@hisilicon.com>
      Signed-off-by: Yu Dongbin <yudongbin@hisilicon.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      59dfafd0
    • arm64: traps: address fallout from printk -> pr_* conversion · c9cd0ed9
      Will Deacon authored
      Commit ac7b406c ("arm64: Use pr_* instead of printk") was a fairly
      mindless s/printk/pr_*/ change driven by a complaint from checkpatch.
      
      As is usual with such changes, this has led to some odd behaviour on
      arm64:
      
        * syslog now picks up the "pr_emerg" line from dump_backtrace, but not
          the actual trace, which leads to a bunch of "kernel:Call trace:"
          lines in the log
      
        * __{pte,pmd,pgd}_error print at KERN_CRIT, as opposed to KERN_ERR
          which is used by other architectures.
      
      This patch restores the original printk behaviour for dump_backtrace
      and downgrades the pgtable error macros to KERN_ERR (see the sketch
      below).
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      c9cd0ed9
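
      A sketch of the second fix, with an assumed macro name (the real
      definitions live in the arm64 pgtable/traps code):

        #include <linux/printk.h>

        /* Page-table corruption reports go back to KERN_ERR, matching other
         * architectures, instead of the KERN_CRIT they had been bumped to. */
        #define report_bad_pgd(file, line, val) \
                pr_err("%s:%d: bad pgd %016lx.\n", (file), (line), (val))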
    • arm64: ftrace: fix a stack tracer's output under function graph tracer · 20380bb3
      AKASHI Takahiro authored
      Function graph tracer modifies a return address (LR) in a stack frame
      to hook a function return. This will result in many useless entries
      (return_to_handler) showing up in
       a) a stack tracer's output
       b) perf call graph (with perf record -g)
       c) dump_backtrace (at panic et al.)
      
      For example, in case of a),
        $ echo function_graph > /sys/kernel/debug/tracing/current_tracer
        $ echo 1 > /proc/sys/kernel/stack_trace_enabled
        $ cat /sys/kernel/debug/tracing/stack_trace
              Depth    Size   Location    (54 entries)
              -----    ----   --------
        0)     4504      16   gic_raise_softirq+0x28/0x150
        1)     4488      80   smp_cross_call+0x38/0xb8
        2)     4408      48   return_to_handler+0x0/0x40
        3)     4360      32   return_to_handler+0x0/0x40
        ...
      
      In case of b),
        $ echo function_graph > /sys/kernel/debug/tracing/current_tracer
        $ perf record -e mem:XXX:x -ag -- sleep 10
        $ perf report
                        ...
                        |          |          |--0.22%-- 0x550f8
                        |          |          |          0x10888
                        |          |          |          el0_svc_naked
                        |          |          |          sys_openat
                        |          |          |          return_to_handler
                        |          |          |          return_to_handler
                        ...
      
      In case of c),
        $ echo function_graph > /sys/kernel/debug/tracing/current_tracer
        $ echo c > /proc/sysrq-trigger
        ...
        Call trace:
        [<ffffffc00044d3ac>] sysrq_handle_crash+0x24/0x30
        [<ffffffc000092250>] return_to_handler+0x0/0x40
        [<ffffffc000092250>] return_to_handler+0x0/0x40
        ...
      
      This patch replaces such entries with the original return addresses
      preserved in current->ret_stack[] when unwinding in unwind_frame().
      This way, we can cover all three cases (see the sketch after this
      entry).
      Reviewed-by: Jungseok Lee <jungseoklee85@gmail.com>
      Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
      [will: fixed minor context changes conflicting with irq stack bits]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      20380bb3
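
      The core of the fix, roughly (a fragment of the unwinder; it assumes the
      upstream ret_stack[]/return_to_handler names and a per-frame graph index):

        #ifdef CONFIG_FUNCTION_GRAPH_TRACER
                if (tsk->ret_stack &&
                    frame->pc == (unsigned long)return_to_handler)
                        /* The LR was patched by the graph tracer; put back the
                         * original return address saved in ret_stack[]. */
                        frame->pc = tsk->ret_stack[frame->graph--].ret;
        #endif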
    • arm64: pass a task parameter to unwind_frame() · fe13f95b
      AKASHI Takahiro authored
      Function graph tracer modifies a return address (LR) in a stack frame
      to hook a function's return. This will result in many useless entries
      (return_to_handler) showing up in a call stack list.
      We will fix this problem in a later patch ("arm64: ftrace: fix a stack
      tracer's output under function graph tracer"). But since real return
      addresses are saved in the ret_stack[] array in struct task_struct,
      the unwind functions need to know, in addition to a stack pointer
      address, which task is being traced in order to recover those real
      return addresses.
      
      This patch extends the unwind functions' interfaces with an extra
      argument: a pointer to the task_struct being unwound (see the sketch
      after this entry).
      Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      fe13f95b
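
      The interface change, in sketch form (struct layout abbreviated):

        struct task_struct;                     /* from <linux/sched.h> */

        struct stackframe {
                unsigned long fp;
                unsigned long sp;
                unsigned long pc;
                /* ... */
        };

        /* Before: int unwind_frame(struct stackframe *frame);
         * After: callers say whose stack is being walked; walking the
         * current task simply passes 'current'. */
        int unwind_frame(struct task_struct *tsk, struct stackframe *frame);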
    • arm64: ftrace: modify a stack frame in a safe way · 79fdee9b
      AKASHI Takahiro authored
      Function graph tracer modifies a return address (LR) in a stack frame
      by calling prepare_ftrace_return() in a traced function's prologue.
      The current code does this modification before preserving the original
      address via ftrace_push_return_trace(), so there is always a small
      window of inconsistency if an interrupt occurs in between.
      
      This doesn't matter in practice now that an interrupt stack has been
      introduced, because the stack tracer won't be invoked in interrupt
      context. But it is better to proactively minimise that window by moving
      the LR modification to after ftrace_push_return_trace() (see the sketch
      after this entry).
      Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      79fdee9b
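
      A sketch of the reordering (record_return_address() is a hypothetical
      stand-in for the ftrace_push_return_trace() step; 'parent' points at
      the LR slot in the traced function's stack frame):

        /* Before (sketch):
         *     *parent = return_hooker;          // LR slot patched first
         *     record_return_address(old, ...);  // original address saved later
         *
         * After: save first, and only patch the LR if that succeeded.
         */
        old = *parent;                                  /* original return address */
        if (record_return_address(old, self_addr) == -EBUSY)
                return;                                 /* nothing was recorded...  */
        *parent = return_hooker;                        /* ...so don't patch the LR */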
    • arm64: remove irq_count and do_softirq_own_stack() · d224a69e
      James Morse authored
      sysrq_handle_reboot() re-enables interrupts while on the irq stack. The
      irq_stack implementation wrongly assumed this would only ever happen
      via the softirq path, allowing it to update irq_count late, in
      do_softirq_own_stack().
      
      This means that if an irq occurs in sysrq_handle_reboot(), the stack
      will be corrupted during emergency_restart(), as irq_count wasn't
      updated.
      
      Lose the optimisation: instead of moving the adding/subtracting of
      irq_count into irq_stack_entry/irq_stack_exit, remove it entirely and
      compare sp_el0 (which holds the struct thread_info pointer) with
      sp & ~(THREAD_SIZE - 1). This tells us whether we are on a task stack;
      if so, we can safely switch to the irq stack (the test is sketched in C
      after this entry). Finally, remove do_softirq_own_stack(); we don't
      need it anymore.
      Reported-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      [will: use get_thread_info macro]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      d224a69e
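
      The replacement test, sketched in C (in the kernel it is a few
      instructions in entry.S; THREAD_SIZE comes from <asm/thread_info.h>):

        /* sp_el0 holds the current struct thread_info pointer, which sits at
         * the base of the THREAD_SIZE-aligned task stack.  Masking sp down to
         * that boundary therefore matches sp_el0 only while we are still on
         * the task stack, and only then is it safe to move to the irq stack. */
        static inline bool on_task_stack(unsigned long sp, unsigned long sp_el0)
        {
                return (sp & ~(THREAD_SIZE - 1)) == sp_el0;
        }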
    • arm64: hugetlb: add support for PTE contiguous bit · 66b3923a
      David Woods authored
      The arm64 MMU supports a Contiguous bit which is a hint that the TTE
      is one of a set of contiguous entries which can be cached in a single
      TLB entry.  Supporting this bit adds new intermediate huge page sizes.
      
      The set of huge page sizes available depends on the base page size.
      Without using contiguous pages the huge page sizes are as follows.
      
       4KB:   2MB  1GB
      64KB: 512MB
      
      With a 4KB granule, the contiguous bit groups together sets of 16 pages
      and with a 64KB granule it groups sets of 32 pages.  This enables two new
      huge page sizes in each case, so that the full set of available sizes
      is as follows.
      
       4KB:  64KB   2MB  32MB  1GB
      64KB:   2MB 512MB  16GB
      
      If a 16KB granule is used then the contiguous bit groups 128 pages
      at the PTE level and 32 pages at the PMD level.
      
      If the base page size is set to 64KB then 2MB pages are enabled by
      default.  It is possible in the future to make 2MB the default huge
      page size for both 4KB and 64KB granules. (The size arithmetic is
      worked through in the sketch after this entry.)
      Reviewed-by: Chris Metcalf <cmetcalf@ezchip.com>
      Reviewed-by: Steve Capper <steve.capper@linaro.org>
      Signed-off-by: David Woods <dwoods@ezchip.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      66b3923a
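
      The arithmetic behind the size table above, worked through (block sizes
      per granule are assumed: 2MB PMD blocks for 4KB pages, 512MB for 64KB
      pages, 32MB for 16KB pages):

        #include <stdio.h>

        int main(void)
        {
                /*                 contiguous PTEs       contiguous PMDs      */
                printf("4KB granule:  16 x 4KB  = %d KB, 16 x 2MB   = %d MB\n",
                       16 * 4, 16 * 2);
                printf("64KB granule: 32 x 64KB = %d MB, 32 x 512MB = %d GB\n",
                       32 * 64 / 1024, 32 * 512 / 1024);
                printf("16KB granule: 128 x 16KB = %d MB, 32 x 32MB = %d GB\n",
                       128 * 16 / 1024, 32 * 32 / 1024);
                return 0;
        }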
  6. 21 Dec, 2015 (3 commits)
  7. 19 Dec, 2015 (1 commit)
    • bpf: move clearing of A/X into classic to eBPF migration prologue · 8b614aeb
      Daniel Borkmann authored
      Back in the days when eBPF (or, back then, "internal BPF" ;->) was not
      exposed to user space, and only classic BPF programs were internally
      translated into eBPF programs, we missed the fact that for classic BPF
      A and X needed to be cleared. It was fixed back then via 83d5b7ef
      ("net: filter: initialize A and X registers"), and thus classic BPF
      specifics were added to the eBPF interpreter core to work around it.
      
      This added some confusion for JIT developers later on who took the
      eBPF interpreter code as an example when deriving their JITs. F.e. in
      f75298f5 ("s390/bpf: clear correct BPF accumulator register"), at
      least X could leak stack memory. Furthermore, since this is only needed
      for classic BPF translations and not for eBPF (the verifier takes care
      that read access to regs cannot be done uninitialized), more complexity
      is added to JITs as they need to determine whether they deal with
      migrations or native eBPF, where they can simply omit clearing A/X in
      their prologue and thus reduce image size a bit; see f.e. cde66c2d
      ("s390/bpf: Only clear A and X for converted BPF programs"). In other
      cases (x86, arm64), A and X are cleared in the prologue also in the
      eBPF case, which is unnecessary.
      
      Let's move this into the classic-to-eBPF migration in
      bpf_convert_filter(), where it actually belongs, while the number of
      eBPF JITs is still small. It can thus be done generically, allowing us
      to remove the quirk from __bpf_prog_run() and to slightly reduce JIT
      image size in the eBPF case, while reducing code duplication on this
      matter in current (and future) eBPF JITs (the emitted prologue is
      sketched after this entry).
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Reviewed-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
      Tested-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
      Cc: Zi Shen Lim <zlim.lnx@gmail.com>
      Cc: Yang Shi <yang.shi@linaro.org>
      Acked-by: Yang Shi <yang.shi@linaro.org>
      Acked-by: Zi Shen Lim <zlim.lnx@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8b614aeb
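
      The shape of the change, sketched (BPF_REG_A/BPF_REG_X are the
      translator's internal aliases for the eBPF registers that classic A and
      X map onto; this is not the verbatim kernel diff):

        /* At the start of the emission pass in bpf_convert_filter(), emit two
         * instructions that zero A and X before any translated classic
         * instruction runs, instead of clearing them in the interpreter and
         * in every JIT prologue. */
        if (new_insn) {
                *new_insn++ = BPF_ALU64_REG(BPF_XOR, BPF_REG_A, BPF_REG_A);
                *new_insn++ = BPF_ALU64_REG(BPF_XOR, BPF_REG_X, BPF_REG_X);
        }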
  8. 18 Dec, 2015 (3 commits)
  9. 17 Dec, 2015 (2 commits)
  10. 16 Dec, 2015 (1 commit)
    • arm64: reduce stack use in irq_handler · 971c67ce
      James Morse authored
      The code for switching to the irq_stack stores three pieces of
      information on the stack: fp+lr, as a fake stack frame (that lets us
      walk back onto the interrupted task's stack frame), and the address of
      the struct pt_regs that contains the register values from kernel entry
      (which dump_backtrace() will print in any stack trace).
      
      To reduce this, we store only fp and the pointer to the struct
      pt_regs. unwind_frame() can recognise this as the irq_stack dummy frame
      (as it only appears at the top of the irq_stack) and use the struct
      pt_regs values to find the missing interrupted link register (a sketch
      follows this entry).
      Suggested-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      971c67ce
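
      A sketch of the resulting dummy frame and how the unwinder consumes it
      (struct and helper names here are hypothetical, for illustration only):

        struct irq_dummy_frame {                 /* hypothetical name */
                unsigned long fp;                /* interrupted task's frame pointer */
                struct pt_regs *regs;            /* registers saved at kernel entry */
        };

        /* In unwind_frame(), sketch: the dummy frame only ever appears at the
         * top of the irq stack, so it can be recognised by its address. */
        if (frame->fp == top_of_irq_stack()) {   /* hypothetical helper */
                struct irq_dummy_frame *d = (void *)frame->fp;

                frame->fp = d->fp;               /* step back onto the task stack */
                frame->pc = d->regs->pc;         /* recover the interrupted pc    */
        }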
  11. 14 Dec, 2015 (12 commits)