1. 24 Sep, 2016 (1 commit)
    • arm64: arch_timer: Work around QorIQ Erratum A-008585 · f6dc1576
      Committed by Scott Wood
      Erratum A-008585 says that the ARM generic timer counter "has the
      potential to contain an erroneous value for a small number of core
      clock cycles every time the timer value changes".  Accesses to TVAL
      (both read and write) are also affected due to the implicit counter
      read.  Accesses to CVAL are not affected.
      
      The workaround is to reread TVAL and count registers until successive
      reads return the same value.  Writes to TVAL are replaced with an
      equivalent write to CVAL.
      
      The workaround is enabled if the fsl,erratum-a008585 property is found in
      the timer node in the device tree.  This can be overridden with the
      clocksource.arm_arch_timer.fsl-a008585 boot parameter, which allows KVM
      users to enable the workaround until a mechanism is implemented to
      automatically communicate this information.
      
      This erratum can be found on LS1043A and LS2080A.
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Scott Wood <oss@buserror.net>
      [will: renamed read macro to reflect that it's not usually unstable]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
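
      For illustration, a minimal C sketch of the loop this workaround
      implies; read_counter(), write_cval() and the retry bound are
      illustrative stand-ins for the timer system-register accessors, not
      the kernel's actual macros:

        #include <stdint.h>

        /* Stand-ins for CNTVCT / CNTV_CVAL system-register accessors. */
        extern uint64_t read_counter(void);
        extern void write_cval(uint64_t cval);

        /* Reread until two successive reads agree: per the erratum the
         * value is only transiently wrong, so identical back-to-back
         * reads can be trusted. The retry bound is arbitrary here. */
        static uint64_t stable_counter_read(void)
        {
                uint64_t old, new;
                int retries = 200;

                do {
                        old = read_counter();
                        new = read_counter();
                } while (old != new && --retries);

                return new;
        }

        /* A TVAL write becomes an equivalent CVAL write: CVAL is not
         * affected by the erratum, and TVAL is architecturally
         * CVAL - counter. */
        static void write_tval(int32_t tval)
        {
                write_cval(stable_counter_read() + tval);
        }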
  2. 22 Sep, 2016 (1 commit)
  3. 20 Sep, 2016 (1 commit)
  4. 17 Sep, 2016 (3 commits)
  5. 15 Sep, 2016 (1 commit)
    • arm64: Improve kprobes test for atomic sequence · 3e593f66
      Committed by David A. Long
      Kprobes searches backwards a finite number of instructions to determine if
      there is an attempt to probe a load/store exclusive sequence. It stops when
      it hits the maximum number of instructions or a load or store exclusive.
      However, this means it can run up past the beginning of the function and
      start looking at literal constants. This has been shown to cause a false
      positive and blocks insertion of the probe. To fix this, further limit the
      backwards search to stop if it hits a symbol address from kallsyms. The
      presumption is that this is the entry point to this code (particularly for
      the common case of placing probes at the beginning of functions).
      
      This also improves efficiency by not searching code that is not part of the
      function. There is some possibility that the label might not denote the
      entry path to the probed instruction, but the likelihood seems low, and
      this is just another example of how kprobes users really need to be
      careful about what they are doing.
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: David A. Long <dave.long@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
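
      For illustration, a hedged C sketch of the bounded backwards scan;
      the fallback limit and overall shape are illustrative rather than
      the exact patch (aarch64_insn_is_load_ex()/_store_ex() are the
      <asm/insn.h> instruction classifiers):

        #include <linux/kallsyms.h>
        #include <linux/kernel.h>
        #include <linux/types.h>
        #include <asm/insn.h>

        /* Illustrative fallback limit when kallsyms cannot place addr. */
        #define MAX_ATOMIC_CONTEXT_SIZE (128 / sizeof(u32))

        /* Scanning backwards from the probe point: a store-exclusive
         * means any sequence is already complete (safe to probe); a
         * load-exclusive means we are inside one (reject the probe). */
        static bool probe_in_atomic_sequence(u32 *addr)
        {
                unsigned long size, offset;
                u32 *scan, *scan_end;

                /* Never scan past the start of the containing symbol;
                 * fall back to a fixed count if kallsyms has nothing. */
                if (kallsyms_lookup_size_offset((unsigned long)addr,
                                                &size, &offset))
                        scan_end = addr - (offset / sizeof(u32));
                else
                        scan_end = addr - MAX_ATOMIC_CONTEXT_SIZE;

                for (scan = addr - 1; scan >= scan_end; scan--) {
                        if (aarch64_insn_is_store_ex(le32_to_cpu(*scan)))
                                return false;
                        if (aarch64_insn_is_load_ex(le32_to_cpu(*scan)))
                                return true;
                }
                return false;
        }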
  6. 12 Sep, 2016 (3 commits)
    • arm64/kvm: use alternative auto-nop · e506236a
      Committed by Mark Rutland
      Make use of the new alternative_if and alternative_else_nop_endif and
      get rid of our open-coded NOP sleds, making the code simpler to read.
      
      Note that for __kvm_call_hyp the branch to __vhe_hyp_call has been moved
      out of the alternative sequence, and in the default case there will be
      four additional NOPs executed.
      
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: kvmarm@lists.cs.columbia.edu
      Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: use alternative auto-nop · 6ba3b554
      Committed by Mark Rutland
      Make use of the new alternative_if and alternative_else_nop_endif and
      get rid of our homebrew NOP sleds, making the code simpler to read.
      
      Note that for cpu_do_switch_mm the ret has been moved out of the
      alternative sequence, and in the default case there will be three
      additional NOPs executed.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: alternative: add auto-nop infrastructure · 792d4737
      Committed by Mark Rutland
      In some cases, one side of an alternative sequence is simply a number of
      NOPs used to balance the other side. Keeping track of this manually is
      tedious, and the presence of large chains of NOPs makes the code more
      painful to read than necessary.
      
      To ameliorate matters, this patch adds a new alternative_else_nop_endif,
      which automatically balances an alternative sequence with a trivial NOP
      sled.
      
      In many cases, we would like a NOP-sled in the default case, and
      instructions patched in in the presence of a feature. To enable the NOPs
      to be generated automatically for this case, this patch also adds a new
      alternative_if, and updates alternative_else and alternative_endif to
      work with either alternative_if or alternative_if_not.
      
      Cc: Andre Przywara <andre.przywara@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dave Martin <dave.martin@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      [will: use new nops macro to generate nop sequences]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
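
      The new macros are assembler-side, but the idea can be sketched with
      the C-level ALTERNATIVE() macro from <asm/alternative.h>, which takes
      the same (default, patched, feature) triple; ARM64_EXAMPLE_FEATURE is
      a hypothetical capability number:

        #include <asm/alternative.h>

        /* The default side is a NOP; when the (hypothetical) capability
         * ARM64_EXAMPLE_FEATURE is detected at boot, the NOP is patched
         * into the barrier. alternative_if/alternative_else_nop_endif
         * give .S files the same shape, generating the balancing NOP
         * sled automatically. */
        static inline void maybe_barrier(void)
        {
                asm volatile(ALTERNATIVE("nop",
                                         "dmb ish",
                                         ARM64_EXAMPLE_FEATURE));
        }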
  7. 10 Sep, 2016 (3 commits)
  8. 09 Sep, 2016 (22 commits)
  9. 08 Sep, 2016 (1 commit)
    • arm64/io: Allow I/O writes to use {W,X}ZR · ee5e41b5
      Committed by Robin Murphy
      When zeroing an I/O location, the current accessors are forced to
      allocate a temporary register to store the zero for the write. By
      tweaking the assembly constraints, we can allow the compiler to use
      the zero register directly in such cases, and save some juggling.
      Compiling a representative kernel configuration with GCC 6 shows
      that 2.3KB worth of code can be wasted just on that!
      
        text     data    bss      dec      hex     filename
       13316776 3248256 18176769 34741801 2121e29 vmlinux.o.new
       13319140 3248256 18176769 34744165 2122765 vmlinux.o.old
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
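
      The tweak itself is small; a hedged sketch of one accessor in the
      shape of arm64's __raw_writel (illustrative, not the exact patch):

        #include <stdint.h>

        /* With "rZ" instead of "r", a constant-zero val lets GCC use the
         * zero register directly ("str wzr, [x0]") instead of first
         * materialising 0 in a temporary ("mov w1, #0; str w1, [x0]"). */
        static inline void writel_sketch(uint32_t val, volatile uint32_t *addr)
        {
                asm volatile("str %w0, [%1]" : : "rZ" (val), "r" (addr));
        }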
  10. 07 Sep, 2016 (2 commits)
  11. 05 Sep, 2016 (2 commits)
    • arm64: ftrace: add save_stack_trace_regs() · 98ab10e9
      Committed by Pratyush Anand
      Currently, enabling stacktrace of kprobe events generates a warning:
      
        echo stacktrace > /sys/kernel/debug/tracing/trace_options
        echo "p xhci_irq" > /sys/kernel/debug/tracing/kprobe_events
        echo 1 > /sys/kernel/debug/tracing/events/kprobes/enable
      
      save_stack_trace_regs() not implemented yet.
      ------------[ cut here ]------------
      WARNING: CPU: 1 PID: 0 at ../kernel/stacktrace.c:74 save_stack_trace_regs+0x3c/0x48
      Modules linked in:
      
      CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.8.0-rc4-dirty #5128
      Hardware name: ARM Juno development board (r1) (DT)
      task: ffff800975dd1900 task.stack: ffff800975ddc000
      PC is at save_stack_trace_regs+0x3c/0x48
      LR is at save_stack_trace_regs+0x3c/0x48
      pc : [<ffff000008126c64>] lr : [<ffff000008126c64>] pstate: 600003c5
      sp : ffff80097ef52c00
      
      Call trace:
         save_stack_trace_regs+0x3c/0x48
         __ftrace_trace_stack+0x168/0x208
         trace_buffer_unlock_commit_regs+0x5c/0x7c
         kprobe_trace_func+0x308/0x3d8
         kprobe_dispatcher+0x58/0x60
         kprobe_breakpoint_handler+0xbc/0x18c
         brk_handler+0x50/0x90
         do_debug_exception+0x50/0xbc
      
      This patch implements save_stack_trace_regs(), so that the stacktrace of
      kprobe events can be obtained.
      
      After this patch, there is no warning and we can see the stacktrace for
      kprobe events in trace buffer.
      
      more /sys/kernel/debug/tracing/trace
                <idle>-0     [004] d.h.  1356.000496: p_xhci_irq_0:(xhci_irq+0x0/0x9ac)
                <idle>-0     [004] d.h.  1356.000497: <stack trace>
        => xhci_irq
        => __handle_irq_event_percpu
        => handle_irq_event_percpu
        => handle_irq_event
        => handle_fasteoi_irq
        => generic_handle_irq
        => __handle_domain_irq
        => gic_handle_irq
        => el1_irq
        => arch_cpu_idle
        => default_idle_call
        => cpu_startup_entry
        => secondary_start_kernel
        =>
      Tested-by: David A. Long <dave.long@linaro.org>
      Reviewed-by: James Morse <james.morse@arm.com>
      Signed-off-by: Pratyush Anand <panand@redhat.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
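
      A hedged sketch of what such an implementation looks like: seed a
      stackframe from the exception's pt_regs and hand it to the existing
      arm64 frame walker. save_trace() and struct stack_trace_data already
      exist in arch/arm64/kernel/stacktrace.c; details here are
      illustrative rather than the exact patch:

        #include <linux/kernel.h>
        #include <linux/sched.h>
        #include <linux/stacktrace.h>
        #include <asm/stacktrace.h>

        void save_stack_trace_regs(struct pt_regs *regs,
                                   struct stack_trace *trace)
        {
                struct stack_trace_data data;
                struct stackframe frame;

                data.trace = trace;
                data.skip = trace->skip;
                data.no_sched_functions = 0;

                /* Walk from where the exception hit, not this call site. */
                frame.fp = regs->regs[29];      /* x29: frame pointer */
                frame.sp = regs->sp;
                frame.pc = regs->pc;

                walk_stackframe(current, &frame, save_trace, &data);
                if (trace->nr_entries < trace->max_entries)
                        trace->entries[trace->nr_entries++] = ULONG_MAX;
        }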
    • arm64: kernel: re-export _cpu_resume() from sleep.S · dc002475
      Committed by Ard Biesheuvel
      Commit b5fe2429 ("arm64: kernel: fix style issues in sleep.S")
      changed the linkage of _cpu_resume() to local, even though the symbol
      is also referenced from hibernate.c. So revert this change.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>