1. 24 Jun 2013 (1 commit)
  2. 04 Apr 2013 (1 commit)
    • ARM: 7688/1: add support for context tracking subsystem · b0088480
      Committed by Kevin Hilman
      commit 91d1aa43 (context_tracking: New context tracking subsystem)
      generalized parts of the RCU userspace extended quiescent state into
      the context tracking subsystem.  Context tracking is then used
      to implement adaptive tickless operation (a.k.a. extended nohz).
      
      To support the new context tracking subsystem on ARM, the user/kernel
      boundary transitions need to be instrumented.
      
      For exceptions and IRQs in usermode, the existing usr_entry macro is
      used to instrument the user->kernel transition.  For the return to
      usermode path, the ret_to_user* path is instrumented.  Using the
      usr_entry macro, this covers interrupts in userspace, data abort and
      prefetch abort exceptions in userspace, as well as undefined instruction
      exceptions in userspace (which is where FP emulation and VFP are handled).
      
      For syscalls, the slow return path is covered by instrumenting the
      ret_to_user path.  In addition, the syscall entry point is
      instrumented, which covers the user->kernel transition for both fast
      and slow syscalls, and an additional instrumentation point is added
      for the fast syscall return path (ret_fast_syscall).  A C-level sketch
      of these hook points follows this entry.
      
      Cc: Mats Liljegren <mats.liljegren@enea.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Kevin Hilman <khilman@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      b0088480
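      In C terms, the new instrumentation boils down to notifying the context
      tracking code at every boundary crossing.  The sketch below uses the
      generic user_exit()/user_enter() helpers; the example_* wrappers are
      invented for illustration, since the real hooks are assembler macros in
      the ARM entry code rather than C functions.

        #include <linux/context_tracking.h>

        /*
         * Illustrative only: shows which context-tracking calls the new
         * usr_entry / ret_to_user / ret_fast_syscall hooks correspond to.
         */
        static void example_enter_kernel_from_user(void)
        {
                user_exit();    /* userspace left its extended quiescent state */
                /* ... handle the exception, IRQ or syscall ... */
        }

        static void example_return_to_user(void)
        {
                /* ... finish kernel work, process pending TIF flags ... */
                user_enter();   /* back to adaptive-tickless userspace mode */
        }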
  3. 03 Apr 2013 (2 commits)
  4. 24 Feb 2013 (1 commit)
  5. 31 Jul 2012 (1 commit)
    • ARM: Fix undefined instruction exception handling · 15ac49b6
      Committed by Russell King
      While trying to get a v3.5 kernel booted on the cubox, I noticed that
      VFP does not work correctly with VFP bounce handling.  This is because
      of the confusion over 16-bit vs 32-bit instructions, and where the PC
      is supposed to point.
      
      The rule is that FP handlers are entered with regs->ARM_pc pointing at
      the _next_ instruction to be executed.  However, if the exception is
      not handled, regs->ARM_pc points at the faulting instruction.
      
      This is easy in ARM mode, because we know that the next and previous
      instructions are always separated by four bytes.  This is not true of
      Thumb2, though.
      
      Fortunately, all FP instructions are 32-bit in Thumb2, so we just need
      to select the appropriate adjustment.  Do this by moving the
      adjustment out of do_undefinstr() into the assembly code, as only the
      assembly code knows whether it is dealing with a 32-bit or a 16-bit
      instruction (see the width-check sketch after this entry).
      
      Cc: <stable@vger.kernel.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      15ac49b6
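      The width check that the assembly now performs can be illustrated in a
      few lines of standalone C.  This is only a sketch with invented helper
      names; the 0xe800 cut-off is the standard test for a 32-bit Thumb-2
      first halfword, and because all FP instructions are 32-bit, the bounce
      path always ends up with a 4-byte adjustment once the width is known.

        #include <stdbool.h>
        #include <stdint.h>

        /* A Thumb instruction whose first halfword is >= 0xe800 is 32 bits
         * wide; anything below that is a 16-bit instruction. */
        static bool thumb_insn_is_32bit(uint16_t first_halfword)
        {
                return first_halfword >= 0xe800;
        }

        /* Hypothetical helper: how far the saved PC must be wound back so it
         * points at the faulting instruction when the exception goes
         * unhandled.  ARM instructions are always 4 bytes; Thumb ones are
         * 2 or 4. */
        static unsigned int undef_pc_correction(bool thumb_mode,
                                                uint16_t first_halfword)
        {
                if (!thumb_mode)
                        return 4;
                return thumb_insn_is_32bit(first_halfword) ? 4 : 2;
        }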
  6. 16 Jun 2012 (1 commit)
  7. 05 May 2012 (1 commit)
  8. 29 Mar 2012 (1 commit)
  9. 14 Mar 2012 (1 commit)
  10. 22 Feb 2012 (1 commit)
  11. 03 Feb 2012 (1 commit)
  12. 27 Nov 2011 (1 commit)
  13. 16 Nov 2011 (1 commit)
  14. 17 Oct 2011 (2 commits)
  15. 02 Jul 2011 (9 commits)
  16. 30 Jun 2011 (2 commits)
  17. 29 Jun 2011 (5 commits)
  18. 20 Jun 2011 (1 commit)
  19. 06 Jun 2011 (1 commit)
    • ARM: 6952/1: fix lockdep warning of "unannotated irqs-off" · 9fc2552a
      Committed by Ming Lei
      This patch fixes the lockdep warning of "unannotated irqs-off" [1].

      On entry to __irq_usr, the ARM core disables interrupts automatically,
      but __irq_usr does not annotate this IRQ disable, so lockdep may emit
      the warning if it gets a chance to check the flags in the IRQ handler.

      This patch adds trace_hardirqs_off in __irq_usr before entering
      irq_handler to handle the IRQ, and returns via ret_to_user_from_irq to
      avoid calling disable_irq again (see the sketch after this entry).

      This also fixes the irqs-off tracer.
      
      [1] lockdep warning log of "unannotated irqs-off":
      
      [   13.804687] ------------[ cut here ]------------
      [   13.809570] WARNING: at kernel/lockdep.c:3335 check_flags+0x78/0x1d0()
      [   13.816467] Modules linked in:
      [   13.819732] Backtrace:
      [   13.822357] [<c01cb42c>] (dump_backtrace+0x0/0x100) from [<c06abb14>] (dump_stack+0x20/0x24)
      [   13.831268]  r6:c07d8c2c r5:00000d07 r4:00000000 r3:00000000
      [   13.837280] [<c06abaf4>] (dump_stack+0x0/0x24) from [<c01ffc04>] (warn_slowpath_common+0x5c/0x74)
      [   13.846649] [<c01ffba8>] (warn_slowpath_common+0x0/0x74) from [<c01ffc48>] (warn_slowpath_null+0x2c/0x34)
      [   13.856781]  r8:00000000 r7:00000000 r6:c18b8194 r5:60000093 r4:ef182000
      [   13.863708] r3:00000009
      [   13.866485] [<c01ffc1c>] (warn_slowpath_null+0x0/0x34) from [<c0237d84>] (check_flags+0x78/0x1d0)
      [   13.875823] [<c0237d0c>] (check_flags+0x0/0x1d0) from [<c023afc8>] (lock_acquire+0x4c/0x150)
      [   13.884704] [<c023af7c>] (lock_acquire+0x0/0x150) from [<c06af638>] (_raw_spin_lock+0x4c/0x84)
      [   13.893798] [<c06af5ec>] (_raw_spin_lock+0x0/0x84) from [<c01f9a44>] (sched_ttwu_pending+0x58/0x8c)
      [   13.903320]  r6:ef92d040 r5:00000003 r4:c18b8180
      [   13.908233] [<c01f99ec>] (sched_ttwu_pending+0x0/0x8c) from [<c01f9a90>] (scheduler_ipi+0x18/0x1c)
      [   13.917663]  r6:ef183fb0 r5:00000003 r4:00000000 r3:00000001
      [   13.923645] [<c01f9a78>] (scheduler_ipi+0x0/0x1c) from [<c01bc458>] (do_IPI+0x9c/0xfc)
      [   13.932006] [<c01bc3bc>] (do_IPI+0x0/0xfc) from [<c06b0888>] (__irq_usr+0x48/0xe0)
      [   13.939971] Exception stack(0xef183fb0 to 0xef183ff8)
      [   13.945281] 3fa0:                                     ffffffc3 0001500c 00000001 0001500c
      [   13.953948] 3fc0: 00000050 400b45f0 400d9000 00000000 00000001 400d9600 6474e552 bea05b3c
      [   13.962585] 3fe0: 400d96c0 bea059c0 400b6574 400b65d8 20000010 ffffffff
      [   13.969573]  r6:00000403 r5:fa240100 r4:ffffffff r3:20000010
      [   13.975585] ---[ end trace efc4896ab0fb62cb ]---
      [   13.980468] possible reason: unannotated irqs-off.
      [   13.985534] irq event stamp: 1610
      [   13.989044] hardirqs last  enabled at (1610): [<c01c703c>] no_work_pending+0x8/0x2c
      [   13.997131] hardirqs last disabled at (1609): [<c01c7024>] ret_slow_syscall+0xc/0x1c
      [   14.005371] softirqs last  enabled at (0): [<c01fe5e4>] copy_process+0x2cc/0xa24
      [   14.013183] softirqs last disabled at (0): [<  (null)>]   (null)
      Signed-off-by: Ming Lei <ming.lei@canonical.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      9fc2552a
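      The fix amounts to telling lockdep about an interrupt disable that the
      hardware has already performed.  The function below is a hypothetical
      C rendering of the ordering (the real __irq_usr path is assembly);
      trace_hardirqs_off() is the actual lockdep annotation being added.

        #include <linux/irqflags.h>

        /* Sketch only: mirrors the order of operations in __irq_usr. */
        static void example_irq_entry_from_user(void)
        {
                /* The CPU entered the exception with IRQs masked by hardware,
                 * so record that before anything that might take a lock. */
                trace_hardirqs_off();

                /* ... irq_handler: dispatch the interrupt ... */

                /* Return via ret_to_user_from_irq, which skips the redundant
                 * interrupt disable on the way back to userspace. */
        }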
  20. 12 Feb 2011 (1 commit)
  21. 24 Dec 2010 (2 commits)
    • ARM: 6538/1: Subarch IRQ handler macros V3 · cd544ce7
      Committed by Magnus Damm
      Per subarch interrupt handler macros V3.
      
      This patch breaks out code from the irq_handler macro
      into arch_irq_handler and arch_irq_handler_default.

      The macros are placed in the header file "entry-macro-multi.S".

      The arch_irq_handler_default macro is designed to be used by
      irq_handler in entry-armv.S, while arch_irq_handler is suitable for
      per-subarch use (a loose C analogy follows this entry).
      Signed-off-by: Magnus Damm <damm@opensource.se>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      cd544ce7
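      The split itself is done with assembler macros, but its shape resembles
      the familiar C pattern of a weak default that a subarch may override.
      The sketch below is only that analogy, with an invented symbol name; it
      is not the actual entry-macro-multi.S code.

        /* Analogy: arch_irq_handler_default plays the role of the weak
         * default (used by the generic irq_handler in entry-armv.S), while a
         * subarch needing its own decoding supplies a strong definition,
         * much like using arch_irq_handler directly. */
        void __attribute__((weak)) example_arch_irq_handler(void)
        {
                /* generic decode-and-dispatch path */
        }

        /* A subarch override would simply provide a strong definition of
         * example_arch_irq_handler() with its own platform-specific
         * decoding. */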
    • ARM: 6532/1: Allow machine to specify its own IRQ handlers at run-time · 52108641
      Committed by Eric Miao
      Normally, different ARM platforms have different ways to decode the
      IRQ hardware status and demultiplex it to the corresponding IRQ
      handler.  This is highly optimized by the irq_handler macro in
      entry-armv.S, and each machine defines its own macro to decode the
      IRQ number.  However, this prevents multiple machine classes from
      being built into a single kernel.

      This can be solved by allowing each machine to specify its own
      handler and making the function pointer 'handle_arch_irq' point to it
      at run time.  CONFIG_MULTI_IRQ_HANDLER is introduced to allow both
      solutions to work (see the sketch after this entry).
      
      Compared with the highly optimized irq_handler macro, the new function
      must be written with care so as not to lose too much performance.  The
      IPI handling on SMP is expected to move into the provided arch IRQ
      handler as well.
      
      The assembly code to invoke handle_arch_irq is optimized by Russell
      King.
      Signed-off-by: Eric Miao <eric.miao@canonical.com>
      Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      52108641
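      From the C side, a machine-specific handler registered through
      handle_arch_irq is just a function taking a struct pt_regs pointer.
      The controller layout and all example_*/EXAMPLE_* names below are
      invented for illustration; handle_IRQ() is the arch/arm helper that
      C-level first-stage decoders conventionally dispatch through
      (asm_do_IRQ() fills the same role in older trees).

        #include <linux/bitops.h>
        #include <linux/io.h>
        #include <linux/types.h>
        #include <asm/irq.h>
        #include <asm/ptrace.h>

        /* Hypothetical interrupt controller: a single pending-IRQ bitmask. */
        #define EXAMPLE_INTC_STATUS     0x00

        static void __iomem *example_intc_base;  /* mapped at machine init */

        /* Matches the signature expected of handle_arch_irq. */
        static void example_handle_irq(struct pt_regs *regs)
        {
                u32 pending;

                /* First-level decode in C, replacing the per-machine
                 * assembler macro. */
                while ((pending = readl_relaxed(example_intc_base +
                                                EXAMPLE_INTC_STATUS)) != 0) {
                        unsigned int irq = __ffs(pending);

                        handle_IRQ(irq, regs);
                }
        }

      With CONFIG_MULTI_IRQ_HANDLER, the machine points handle_arch_irq at a
      function like this early in boot, and the optimized assembly stub
      simply loads and calls the pointer.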
  22. 20 Dec 2010 (1 commit)
    • ARM: 6516/1: Allow SMP_ON_UP to work with Thumb-2 kernels. · ed3768a8
      Committed by Dave Martin
        * __fixup_smp_on_up has been modified with support for the
          THUMB2_KERNEL case.  For THUMB2_KERNEL only, fixups are split
          into halfwords in case of misalignment, since we can't rely on
          unaligned accesses working before the MMU is turned on (see the
          split-store sketch after this entry).
      
          No attempt is made to optimise the aligned case, since the
          number of fixups is typically small, and it seems best to keep
          the code as simple as possible.
      
        * Add a rotate in the fixup_smp code in order to support
          CPU_BIG_ENDIAN, as suggested by Nicolas Pitre.
      
        * Add an assembly-time sanity-check to ALT_UP() to ensure that
          the content really is the right size (4 bytes).
      
          (No check is done for ALT_SMP().  Possibly, this could be fixed
          by splitting the two uses of ALT_SMP() (ALT_SMP...SMP_UP versus
          ALT_SMP...SMP_UP_B) into two macros.  In the first case,
          ALT_SMP needs to expand to >= 4 bytes, not == 4.)
      
        * smp_mpidr.h (which implements ALT_SMP()/ALT_UP() manually due
          to macro limitations) has not been modified: the affected
          instruction (mov) has no 16-bit encoding, so the correct
          instruction size is satisfied in this case.
      
        * A "mode" parameter has been added to smp_dmb:
      
          smp_dmb arm @ assumes 4-byte instructions (for ARM code, e.g. kuser)
          smp_dmb     @ uses W() to ensure 4-byte instructions for ALT_SMP()
      
          This avoids assembly failures due to use of W() inside smp_dmb,
          when assembling pure-ARM code in the vectors page.
      
          There might be a better way to achieve this.
      
        * Kconfig: make SMP_ON_UP depend on
          (!THUMB2_KERNEL || !BIG_ENDIAN), i.e. THUMB2_KERNEL is now
          supported, but only if !BIG_ENDIAN (the fixup code for Thumb-2
          currently assumes little-endian order).
      
      Tested using a single generic realview kernel on:
      	ARM RealView PB-A8 (CONFIG_THUMB2_KERNEL={n,y})
      	ARM RealView PBX-A9 (SMP)
      Signed-off-by: Dave Martin <dave.martin@linaro.org>
      Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      ed3768a8
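      The Thumb-2 complication in the first point above comes down to the
      fact that the 32-bit words being patched may be only 2-byte aligned.
      A standalone sketch of the split-store idea (not the actual
      __fixup_smp_on_up assembly) looks like this, assuming little-endian
      layout as per the commit's !CPU_BIG_ENDIAN restriction:

        #include <stdint.h>

        /* The fixup runs before the MMU is turned on, where unaligned
         * accesses cannot be relied upon, so a possibly-misaligned 32-bit
         * word is written as two aligned halfword stores instead. */
        static void example_patch_word_as_halfwords(uint16_t *target,
                                                    uint32_t new_word)
        {
                target[0] = (uint16_t)(new_word & 0xffff);  /* low half first on LE */
                target[1] = (uint16_t)(new_word >> 16);
        }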
  23. 06 Dec 2010 (1 commit)
    • ARM: hw_breakpoint: disable preemption during debug exception handling · 7e202696
      Committed by Will Deacon
      On ARM, debug exceptions occur in the form of data or prefetch aborts.
      Unlike ordinary aborts, debug exceptions require access to per-CPU
      banked registers and data structures which are not saved in the
      low-level exception code.  For kernels built with CONFIG_PREEMPT,
      there is an unlikely scenario where the debug handler ends up running
      on a different CPU from the one that originally signalled the event,
      resulting in random data being read from the wrong registers.
      
      This patch adds a debug_entry macro to the low-level exception
      handling code which checks whether the taken exception is a debug
      exception.  If it is, the preempt count for the faulting process is
      incremented.  After the debug handler has finished, the count is
      decremented again (see the sketch after this entry).
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      7e202696
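      Conceptually, the debug_entry macro's preempt-count bump (and the
      matching decrement afterwards) is equivalent to wrapping the debug
      handler as sketched below; the real change is in assembly, and the
      function name here is invented.  It keeps the handler from being
      preempted and migrated away from the CPU whose banked debug registers
      it needs to read.

        #include <linux/preempt.h>

        /* Sketch only: what the preempt-count increment/decrement around
         * the debug exception amounts to. */
        static void example_handle_debug_exception(void)
        {
                preempt_disable();

                /* ... read the per-CPU banked debug registers and run the
                 * hw_breakpoint handler ... */

                preempt_enable();
        }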
  24. 04 Dec 2010 (1 commit)