1. 26 Sep 2016, 1 commit
    • arm64: fix dump_backtrace/unwind_frame with NULL tsk · b5e7307d
      Authored by Mark Rutland
      In some places, dump_backtrace() is called with a NULL tsk parameter,
      e.g. in bug_handler() in arch/arm64, or indirectly via show_stack() in
      core code. The expectation is that this is treated as if current were
      passed instead of NULL. The same is true of unwind_frame().
      
      Commit a80a0eb7 ("arm64: make irq_stack_ptr more robust") didn't
      take this into account. In dump_backtrace() it compares tsk against
      current *before* we check if tsk is NULL, and in unwind_frame() we never
      set tsk if it is NULL.
      
      Due to this, we won't initialise irq_stack_ptr in either function. In
      dump_backtrace() this results in calling dump_mem() for memory
      immediately above the IRQ stack range, rather than for the relevant
      range on the task stack. In unwind_frame() we'll reject unwinding frames
      on the IRQ stack.
      
      In either case this results in incomplete or misleading backtrace
      information, but is not otherwise problematic. The initial percpu areas
      (including the IRQ stacks) are allocated in the linear map, and dump_mem()
      uses __get_user(), so we shouldn't access anything with side-effects,
      and will handle holes safely.
      
      This patch fixes the issue by having both functions handle the NULL tsk
      case before doing anything else with tsk.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Fixes: a80a0eb7 ("arm64: make irq_stack_ptr more robust")
      Acked-by: James Morse <james.morse@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yang Shi <yang.shi@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
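      A minimal C sketch of the corrected ordering, assuming the v4.8-era
      helpers (IRQ_STACK_PTR(), per-CPU IRQ stacks) and eliding the actual
      unwind logic:

          /* Sketch, not verbatim kernel code. */
          #include <linux/sched.h>
          #include <linux/smp.h>
          #include <asm/irq.h>    /* IRQ_STACK_PTR(): assumed v4.8 layout */

          void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
          {
                  unsigned long irq_stack_ptr;

                  if (!tsk)               /* the fix: normalise NULL first */
                          tsk = current;

                  /*
                   * Only the running task can be on this CPU's IRQ stack,
                   * so this comparison must see the normalised tsk.
                   */
                  if (tsk == current)
                          irq_stack_ptr = IRQ_STACK_PTR(smp_processor_id());
                  else
                          irq_stack_ptr = 0;

                  /* ... build the stackframe from tsk/regs and walk it ... */
          }

      The same ordering applies in unwind_frame(): tsk must be set to
      current before any tsk-based decision is made.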
  2. 24 Sep 2016, 2 commits
  3. 23 Sep 2016, 1 commit
  4. 22 Sep 2016, 1 commit
  5. 20 Sep 2016, 1 commit
  6. 17 Sep 2016, 4 commits
  7. 15 Sep 2016, 1 commit
    • arm64: Improve kprobes test for atomic sequence · 3e593f66
      Authored by David A. Long
      Kprobes searches backwards a finite number of instructions to determine if
      there is an attempt to probe a load/store exclusive sequence. It stops when
      it reaches the maximum number of instructions or hits a load or store
      exclusive. However, this means the search can run back past the beginning of the function and
      start looking at literal constants. This has been shown to cause a false
      positive and blocks insertion of the probe. To fix this, further limit the
      backwards search to stop if it hits a symbol address from kallsyms. The
      presumption is that this is the entry point to this code (particularly for
      the common case of placing probes at the beginning of functions).
      
      This also improves efficiency by not searching code that is not part of the
      function. There is some possibility that the label does not denote the
      entry path to the probed instruction, but the likelihood seems low, and
      this is just another example of how kprobes users need to be careful
      about what they are doing.
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: David A. Long <dave.long@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
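      A hedged C sketch of the new bound: kallsyms_lookup_size_offset() is
      the real kernel helper, while the constant and the function name here
      are illustrative stand-ins for the patch's actual definitions:

          /* Sketch: cap the backwards scan at the enclosing symbol. */
          #include <linux/kallsyms.h>
          #include <linux/kprobes.h>   /* kprobe_opcode_t (u32 on arm64) */

          /* Illustrative: deepest backwards scan, in instruction slots. */
          #define MAX_ATOMIC_CONTEXT_SIZE (128 / sizeof(kprobe_opcode_t))

          static kprobe_opcode_t *scan_end(kprobe_opcode_t *addr)
          {
                  unsigned long size = 0, offset = 0;

                  /*
                   * If addr is 'offset' bytes into a kallsyms symbol, never
                   * scan back past the symbol start: earlier words belong
                   * to another function or to literal pool data.
                   */
                  if (kallsyms_lookup_size_offset((unsigned long)addr,
                                                  &size, &offset) &&
                      offset < MAX_ATOMIC_CONTEXT_SIZE * sizeof(kprobe_opcode_t))
                          return addr - (offset / sizeof(kprobe_opcode_t));

                  return addr - MAX_ATOMIC_CONTEXT_SIZE;
          }

      The decoder then scans from addr down to this bound, rejecting the
      probe only if the address sits inside a load/store-exclusive sequence.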
  8. 12 Sep 2016, 3 commits
    • arm64/kvm: use alternative auto-nop · e506236a
      Authored by Mark Rutland
      Make use of the new alternative_if and alternative_else_nop_endif and
      get rid of our open-coded NOP sleds, making the code simpler to read.
      
      Note that for __kvm_call_hyp the branch to __vhe_hyp_call has been moved
      out of the alternative sequence, and in the default case there will be
      four additional NOPs executed.
      
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: kvmarm@lists.cs.columbia.edu
      Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: use alternative auto-nop · 6ba3b554
      Authored by Mark Rutland
      Make use of the new alternative_if and alternative_else_nop_endif and
      get rid of our homebrew NOP sleds, making the code simpler to read.
      
      Note that for cpu_do_switch_mm the ret has been moved out of the
      alternative sequence, and in the default case there will be three
      additional NOPs executed.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: alternative: add auto-nop infrastructure · 792d4737
      Authored by Mark Rutland
      In some cases, one side of an alternative sequence is simply a number of
      NOPs used to balance the other side. Keeping track of this manually is
      tedious, and the presence of large chains of NOPs makes the code more
      painful to read than necessary.
      
      To ameliorate matters, this patch adds a new alternative_else_nop_endif,
      which automatically balances an alternative sequence with a trivial NOP
      sled.
      
      In many cases, we would like a NOP sled in the default case, and
      instructions patched in when a feature is present. To enable the NOPs
      to be generated automatically for this case, this patch also adds a new
      alternative_if, and updates alternative_else and alternative_endif to
      work with either alternative_if or alternative_if_not.
      
      Cc: Andre Przywara <andre.przywara@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dave Martin <dave.martin@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      [will: use new nops macro to generate nop sequences]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
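      To make the effect concrete, a hedged before/after sketch in arm64
      assembly (ARM64_HAS_FEATURE_X is a placeholder capability and the
      two-instruction payload is arbitrary):

          // Before: the default path is balanced with hand-written NOPs.
          alternative_if_not ARM64_HAS_FEATURE_X  // placeholder capability
                  nop
                  nop
          alternative_else
                  mrs     x0, tpidr_el1           // arbitrary payload
                  add     x0, x0, #16
          alternative_endif

          // After: alternative_else_nop_endif emits the NOP sled itself.
          alternative_if ARM64_HAS_FEATURE_X
                  mrs     x0, tpidr_el1
                  add     x0, x0, #16
          alternative_else_nop_endif

      Both forms run the mrs/add pair when the capability is present and
      NOPs otherwise; the second simply removes the manual instruction
      counting.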
  9. 10 Sep 2016, 3 commits
  10. 09 Sep 2016, 23 commits