1. 18 Feb, 2011 (1 commit)
    • ARM: P2V: introduce phys_to_virt/virt_to_phys runtime patching · dc21af99
      Authored by Russell King
      This idea came from Nicolas; Eric Miao produced an initial version,
      which was then rewritten into this patch.
      
      Patch the physical-to-virtual translations at runtime.  Because we
      modify the code, this is incompatible with XIP kernels, but it lets us
      achieve the translation with minimal loss of performance.
      
      As many translations are of the form:
      
      	physical = virtual + (PHYS_OFFSET - PAGE_OFFSET)
      	virtual = physical - (PHYS_OFFSET - PAGE_OFFSET)
      
      we generate an 'add' instruction for __virt_to_phys(), and a 'sub'
      instruction for __phys_to_virt().  We calculate (PHYS_OFFSET -
      PAGE_OFFSET) at run time by comparing the address prior to MMU
      initialization with where it should be once the MMU has been
      initialized, and place this constant into the above add/sub
      instructions; a C sketch of the resulting translation follows this
      entry.
      
      Once we have (PHYS_OFFSET - PAGE_OFFSET), we can calculate the real
      PHYS_OFFSET, since PAGE_OFFSET is a build-time constant, and save it
      for the C-mode PHYS_OFFSET variable definition to use.
      
      At present, we are unable to support RealView with Sparsemem enabled,
      as it uses a complex mapping function, or MSM, as it requires a
      constant which will not fit in our math instruction.
      
      Add a module version magic string for this feature to prevent
      incompatible modules from being loaded.
      Tested-by: Tony Lindgren <tony@atomide.com>
      Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
      Tested-by: Nicolas Pitre <nicolas.pitre@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      dc21af99
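      A minimal C sketch of the translation the patched instructions perform.
      The names below are hypothetical, and the real kernel does not read a
      variable at each call site: it rewrites the immediate of an 'add'
      (__virt_to_phys) or 'sub' (__phys_to_virt) instruction once the delta
      is known.

          #include <stdint.h>

          /* PHYS_OFFSET - PAGE_OFFSET, discovered at boot; illustrative only. */
          static uint32_t phys_virt_delta;

          static inline uint32_t sketch_virt_to_phys(uint32_t va)
          {
                  return va + phys_virt_delta;    /* kernel patches an 'add' immediate */
          }

          static inline uint32_t sketch_phys_to_virt(uint32_t pa)
          {
                  return pa - phys_virt_delta;    /* kernel patches a 'sub' immediate */
          }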
  2. 14 Jan, 2011 (2 commits)
  3. 13 Jan, 2011 (1 commit)
  4. 07 Jan, 2011 (2 commits)
  5. 05 Jan, 2011 (1 commit)
  6. 02 Jan, 2011 (1 commit)
  7. 24 Dec, 2010 (1 commit)
    • ARM: 6532/1: Allow machine to specify its own IRQ handlers at run-time · 52108641
      Authored by Eric Miao
      Normally, different ARM platforms have different ways to decode the IRQ
      hardware status and demultiplex it to the corresponding IRQ handler.
      This is highly optimized by the irq_handler macro in entry-armv.S, and
      each machine defines its own macro to decode the IRQ number.  However,
      this prevents multiple machine classes from being built into a single
      kernel.
      
      This can be solved by allowing each machine to specify its own handler
      and making the function pointer 'handle_arch_irq' point to it at run
      time.  Introduce CONFIG_MULTI_IRQ_HANDLER to allow both solutions to
      work.
      
      Compared with the highly optimized irq_handler macro, the new function
      must be written with care so as not to lose too much performance.  The
      IPI handling on SMP is expected to move to the provided arch IRQ
      handler as well.  A C sketch of the mechanism follows this entry.
      
      The assembly code to invoke handle_arch_irq is optimized by Russell
      King.
      Signed-off-by: Eric Miao <eric.miao@canonical.com>
      Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      52108641
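      A hedged C sketch of the run-time handler selection described above.
      Only the handle_arch_irq name comes from the commit; the
      machine-specific names and the simplified types are hypothetical.

          struct pt_regs;                         /* saved register state at the exception */

          /* Installed by each machine at boot, called from the common entry code. */
          void (*handle_arch_irq)(struct pt_regs *regs);

          /* Hypothetical machine-specific decoder: read the interrupt
           * controller's pending status and dispatch the IRQ number. */
          static void example_machine_handle_irq(struct pt_regs *regs)
          {
                  (void)regs;
                  /* e.g. read the pending register, then dispatch the IRQ */
          }

          /* Called from the machine's IRQ init hook when
           * CONFIG_MULTI_IRQ_HANDLER is enabled. */
          static void example_machine_init_irq(void)
          {
                  handle_arch_irq = example_machine_handle_irq;
          }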
  8. 23 Dec, 2010 (12 commits)
  9. 21 Dec, 2010 (2 commits)
  10. 20 Dec, 2010 (2 commits)
    • ARM: 6516/1: Allow SMP_ON_UP to work with Thumb-2 kernels. · ed3768a8
      Authored by Dave Martin
        * __fixup_smp_on_up has been modified with support for the
          THUMB2_KERNEL case.  For THUMB2_KERNEL only, fixups are split
          into halfwords in case of misalignment, since we can't rely on
          unaligned accesses working before turning the MMU on.
      
          No attempt is made to optimise the aligned case, since the
          number of fixups is typically small, and it seems best to keep
          the code as simple as possible.
      
        * Add a rotate in the fixup_smp code in order to support
          CPU_BIG_ENDIAN, as suggested by Nicolas Pitre.
      
        * Add an assembly-time sanity-check to ALT_UP() to ensure that
          the content really is the right size (4 bytes).
      
          (No check is done for ALT_SMP().  Possibly, this could be fixed
          by splitting the two uses of ALT_SMP() (ALT_SMP...SMP_UP versus
          ALT_SMP...SMP_UP_B) into two macros.  In the first case,
          ALT_SMP needs to expand to >= 4 bytes, not == 4.)
      
        * smp_mpidr.h (which implements ALT_SMP()/ALT_UP() manually due
          to macro limitations) has not been modified: the affected
          instruction (mov) has no 16-bit encoding, so the correct
          instruction size is satisfied in this case.
      
        * A "mode" parameter has been added to smp_dmb:
      
          smp_dmb arm @ assumes 4-byte instructions (for ARM code, e.g. kuser)
          smp_dmb     @ uses W() to ensure 4-byte instructions for ALT_SMP()
      
          This avoids assembly failures due to use of W() inside smp_dmb,
          when assembling pure-ARM code in the vectors page.
      
          There might be a better way to achieve this.
      
        * Kconfig: make SMP_ON_UP depend on
          (!THUMB2_KERNEL || !BIG_ENDIAN), i.e. THUMB2_KERNEL is now
          supported, but only if !BIG_ENDIAN (the fixup code for Thumb-2
          currently assumes little-endian order).  A C sketch of the
          halfword fixup idea follows this entry.
      
      Tested using a single generic realview kernel on:
      	ARM RealView PB-A8 (CONFIG_THUMB2_KERNEL={n,y})
      	ARM RealView PBX-A9 (SMP)
      Signed-off-by: Dave Martin <dave.martin@linaro.org>
      Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      ed3768a8
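      A minimal C sketch of the halfword fixup idea, not the kernel's actual
      assembly: the function name is hypothetical and little-endian layout is
      assumed, matching the commit's !BIG_ENDIAN restriction.

          #include <stdint.h>

          /* Write a 32-bit replacement word as two 16-bit stores, so that a
           * fixup located on a 2-byte (Thumb-2) boundary never requires an
           * unaligned 32-bit access before the MMU is enabled. */
          static void write_fixup_halfwords(uint16_t *where, uint32_t new_word)
          {
                  where[0] = (uint16_t)(new_word & 0xffff);
                  where[1] = (uint16_t)(new_word >> 16);
          }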
    • ARM: pxa: add iwmmx support for PJ4 · ef6c8445
      Authored by Haojian Zhuang
      iwmmxt is used in the XScale, XScale3, Mohawk and PJ4 cores, but the
      instructions for accessing CP0 and CP1 have changed in PJ4.  Add more
      files to support iwmmxt on the PJ4 core.
      Signed-off-by: Zhou Zhu <zzhu3@marvell.com>
      Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com>
      Acked-by: Nicolas Pitre <nico@fluxnic.net>
      Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
      ef6c8445
  11. 15 Dec, 2010 (2 commits)
  12. 14 Dec, 2010 (1 commit)
  13. 09 Dec, 2010 (1 commit)
  14. 06 Dec, 2010 (1 commit)
  15. 05 Dec, 2010 (1 commit)
  16. 04 Dec, 2010 (1 commit)
  17. 30 Nov, 2010 (2 commits)
  18. 26 Nov, 2010 (1 commit)
  19. 23 Nov, 2010 (2 commits)
  20. 20 Nov, 2010 (1 commit)
    • ARM: ftrace: enable function graph tracer · 0e341af8
      Authored by Rabin Vincent
      Add the options to enable the function graph tracer on ARM.  Function
      graph tracer support requires frame pointers, so exclude Thumb-2 and
      also make sure FRAME_POINTER gets enabled when FUNCTION_GRAPH_TRACER is
      used, since FUNCTION_TRACER doesn't "select FRAME_POINTER" when
      ARM_UNWIND is used.  Therefore, with GCC 4.4.0+, you get plain function
      tracing without frame pointers, but you'll need them if you want
      function graph tracing.
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Rabin Vincent <rabin@rab.in>
      0e341af8
  21. 16 Nov, 2010 (1 commit)
    • ARM: mach-shmobile: Tidy up the Kconfig bits. · 6d72ad35
      Authored by Paul Mundt
      Presently each one of the CPUs manually selects the same feature set, and
      there's a reasonable expectation that none of these will change for
      future CPUs in the SH-Mobile / R-Mobile family, so we move those over to
      the top-level ARCH_SHMOBILE.
      
      While we're at it, all of the CPUs support optional GPIOs via the PFC,
      do not have I/O ports, and expect sparse IRQ, so we bring the
      configuration in line across the board.
      
      This more or less brings the ARM-based parts in sync with their SH
      counterparts.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      6d72ad35
  22. 13 Nov, 2010 (1 commit)