1. 02 Jun, 2015 · 1 commit
    • ARM: redo TTBR setup code for LPAE · b2c3e38a
      Committed by Russell King
      Re-engineer the LPAE TTBR setup code.  Rather than passing some shifted
      address in order to fit in a CPU register, pass either a full physical
      address (in the case of r4, r5 for TTBR0) or a PFN (for TTBR1).
      
      This removes the ARCH_PGD_SHIFT hack, and the last dangerous user of
      cpu_set_ttbr() in the secondary CPU startup code path (which was there
      to re-set TTBR1 to the appropriate high physical address space on
      Keystone 2).
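      A minimal C sketch of why a PFN fits in a single 32-bit register where a
      full LPAE physical address may not (names and the Keystone 2-style address
      are illustrative, not the actual head.S register protocol):

        #include <stdint.h>

        #define PAGE_SHIFT 12

        /* A PFN is the physical address shifted down by PAGE_SHIFT, so it fits
         * in 32 bits even when the address itself needs more (e.g. a lowmem
         * base of 0x8_0000_0000 becomes PFN 0x800000). */
        static uint64_t ttbr1_from_pfn(uint32_t pfn)
        {
                return (uint64_t)pfn << PAGE_SHIFT;
        }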
      Tested-by: Murali Karicheri <m-karicheri2@ti.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      b2c3e38a
  2. 28 Mar, 2015 · 1 commit
  3. 10 Feb, 2015 · 1 commit
  4. 21 Jan, 2015 · 1 commit
  5. 18 Jul, 2014 · 1 commit
    • ARM: convert all "mov.* pc, reg" to "bx reg" for ARMv6+ · 6ebbf2ce
      Committed by Russell King
      ARMv6 and later introduced a new instruction ("bx") which can be used
      to return from function calls.  Recent CPUs perform better when
      "bx lr" is used rather than "mov pc, lr", and the ARM architecture
      manual (section A.4.1.1) strongly recommends this sequence.
      
      We provide a new macro "ret" with all its variants for the condition
      code which will resolve to the appropriate instruction.
      
      Rather than doing this piecemeal, and missing some instances, change all
      the "mov pc" instances to use the new macro, with the exception of
      the "movs" instruction and the kprobes code.  This allows us to detect
      the "mov pc, lr" case and fix it up - and also gives us the possibility
      of deploying this for other registers depending on the CPU selection.
      Reported-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Stephen Warren <swarren@nvidia.com> # Tegra Jetson TK1
      Tested-by: Robert Jarzmik <robert.jarzmik@free.fr> # mioa701_bootresume.S
      Tested-by: Andrew Lunn <andrew@lunn.ch> # Kirkwood
      Tested-by: Shawn Guo <shawn.guo@freescale.com>
      Tested-by: Tony Lindgren <tony@atomide.com> # OMAPs
      Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com> # Armada XP, 375, 385
      Acked-by: Sekhar Nori <nsekhar@ti.com> # DaVinci
      Acked-by: Christoffer Dall <christoffer.dall@linaro.org> # kvm/hyp
      Acked-by: Haojian Zhuang <haojian.zhuang@gmail.com> # PXA3xx
      Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> # Xen
      Tested-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> # ARMv7M
      Tested-by: Simon Horman <horms+renesas@verge.net.au> # Shmobile
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      6ebbf2ce
  6. 26 May, 2014 · 1 commit
  7. 23 Apr, 2014 · 1 commit
  8. 04 Apr, 2014 · 1 commit
    • ARM: Better virt_to_page() handling · e26a9e00
      Committed by Russell King
      virt_to_page() is incredibly inefficient when virt-to-phys patching is
      enabled.  This is because we end up with this calculation:
      
        page = &mem_map[(asm virt_to_phys(addr) >> 12) - (__pv_phys_offset >> 12)]
      
      in assembly.  The asm virt_to_phys() is equivalent to this operation:
      
        addr - PAGE_OFFSET + __pv_phys_offset
      
      and we can see that because this is assembly, the compiler has no chance
      to optimise some of that away.  This should reduce down to:
      
        page = &mem_map[(addr - PAGE_OFFSET) >> 12]
      
      for the common cases.  Permit the compiler to make this optimisation by
      giving it more of the information it needs - do this by providing a
      virt_to_pfn() macro.
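      A minimal, standalone C sketch of the idea (the macro names mirror the
      kernel ones, but the definitions here are illustrative rather than the
      exact arch/arm code): once virt_to_pfn() and pfn_to_page() are both plain
      C visible to the compiler, the physical offset cancels out of
      virt_to_page().

        #include <stdint.h>

        #define PAGE_SHIFT   12
        #define PAGE_OFFSET  0xC0000000UL      /* illustrative virtual base */

        struct page;                           /* opaque for this sketch */
        extern struct page *mem_map;           /* flat map starting at phys_pfn_offset */
        extern unsigned long phys_pfn_offset;  /* physical offset stored as a PFN */

        #define virt_to_pfn(kaddr) \
                ((((unsigned long)(kaddr) - PAGE_OFFSET) >> PAGE_SHIFT) + phys_pfn_offset)
        #define pfn_to_page(pfn)    (&mem_map[(pfn) - phys_pfn_offset])
        #define virt_to_page(kaddr) pfn_to_page(virt_to_pfn(kaddr))

        /* phys_pfn_offset cancels (unsigned arithmetic is modular), so the
         * compiler can fold virt_to_page(x) down to
         *   &mem_map[((unsigned long)(x) - PAGE_OFFSET) >> PAGE_SHIFT]
         * which is the common-case form quoted above. */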
      
      Another issue which makes this more complex is that __pv_phys_offset is
      a 64-bit type on all platforms.  This is needlessly wasteful: if we
      store the physical offset as a PFN, we can avoid much of the work of
      dealing with 64-bit values, which sometimes ends up producing incredibly
      horrid code:
      
           a4c:       e3009000        movw    r9, #0
                              a4c: R_ARM_MOVW_ABS_NC  __pv_phys_offset
           a50:       e3409000        movt    r9, #0          ; r9 = &__pv_phys_offset
                              a50: R_ARM_MOVT_ABS     __pv_phys_offset
           a54:       e3002000        movw    r2, #0
                              a54: R_ARM_MOVW_ABS_NC  __pv_phys_offset
           a58:       e3402000        movt    r2, #0          ; r2 = &__pv_phys_offset
                              a58: R_ARM_MOVT_ABS     __pv_phys_offset
           a5c:       e5999004        ldr     r9, [r9, #4]    ; r9 = high word of __pv_phys_offset
           a60:       e3001000        movw    r1, #0
                              a60: R_ARM_MOVW_ABS_NC  mem_map
           a64:       e592c000        ldr     ip, [r2]        ; ip = low word of __pv_phys_offset
      Reviewed-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      e26a9e00
  9. 21 Feb, 2014 · 1 commit
    • ARM: 7980/1: kernel: improve error message when LPAE config doesn't match CPU · b3634575
      Committed by Thomas Petazzoni
      Currently, when the kernel is configured with LPAE support, but the
      CPU doesn't support it, the error message is fairly cryptic:
      
        Error: unrecognized/unsupported processor variant (0x561f5811).
      
      This message is normally shown when the processor ID (CP15 0, c0, c0)
      does not match any of the values/masks described in proc-v7.S. However,
      the same message is displayed when LPAE support is enabled in the kernel
      configuration but not available in the CPU, as determined from ID_MMFR0
      (CP15 0, c0, c1, 4). Having the same error message for both cases is
      highly misleading.
      
      This commit improves this by showing a different error message when
      this situation occurs:
      
        Error: Kernel with LPAE support, but CPU does not support LPAE.
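      For illustration, the check being described boils down to reading
      ID_MMFR0 and inspecting its VMSA field. The real test lives in head.S
      assembly; the C rendering below is a hedged sketch, assuming the usual
      encoding where a VMSA value of 5 or more indicates long-descriptor
      (LPAE) support.

        /* ARM-only: reads ID_MMFR0 via CP15 (0, c0, c1, 4). */
        static inline unsigned int read_id_mmfr0(void)
        {
                unsigned int val;

                asm("mrc p15, 0, %0, c0, c1, 4" : "=r" (val));
                return val;
        }

        static int cpu_supports_lpae(void)
        {
                /* VMSA support field, bits [3:0] */
                return (read_id_mmfr0() & 0xf) >= 5;
        }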
      Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      b3634575
  10. 28 Jan, 2014 · 1 commit
  11. 14 Dec, 2013 · 1 commit
    • ARM: fix asm/memory.h build error · b713aa0b
      Committed by Russell King
      Jason Gunthorpe reports a build failure when ARM_PATCH_PHYS_VIRT is
      not defined:
      
      In file included from arch/arm/include/asm/page.h:163:0,
                       from include/linux/mm_types.h:16,
                       from include/linux/sched.h:24,
                       from arch/arm/kernel/asm-offsets.c:13:
      arch/arm/include/asm/memory.h: In function '__virt_to_phys':
      arch/arm/include/asm/memory.h:244:40: error: 'PHYS_OFFSET' undeclared (first use in this function)
      arch/arm/include/asm/memory.h:244:40: note: each undeclared identifier is reported only once for each function it appears in
      arch/arm/include/asm/memory.h: In function '__phys_to_virt':
      arch/arm/include/asm/memory.h:249:13: error: 'PHYS_OFFSET' undeclared (first use in this function)
      
      Fixes: ca5a45c0 ("ARM: mm: use phys_addr_t appropriately in p2v and v2p conversions")
      Tested-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      b713aa0b
  12. 14 Nov, 2013 · 2 commits
  13. 29 Oct, 2013 · 1 commit
  14. 20 Oct, 2013 · 2 commits
  15. 11 Oct, 2013 · 1 commit
    • ARM: mm: Correct virt_to_phys patching for 64 bit physical addresses · f52bb722
      Committed by Sricharan R
      The current phys_to_virt patching mechanism works only for 32-bit
      physical addresses, and this patch extends the idea to 64-bit physical
      addresses.
      
      The 64-bit v2p patching mechanism patches the upper 8 bits of the physical
      address with a constant using a 'mov' instruction, and the lower 32 bits are
      patched using 'add'. While this is correct, on platforms where the lowmem
      addressable physical memory spans the 4GB boundary, a carry bit can be
      produced by the addition of the lower 32 bits. This has to be taken into
      account and added into the upper word. The patched __pv_offset and va are
      added in the lower 32 bits, where __pv_offset can be in two's complement
      form when PA_START < VA_START, and that can result in a false carry bit.
      
      e.g.
          1) PA = 0x80000000; VA = 0xC0000000
             __pv_offset = PA - VA = 0xC0000000 (2's complement)
      
          2) PA = 0x2 80000000; VA = 0xC0000000
             __pv_offset = PA - VA = 0x1 C0000000
      
      So adding __pv_offset + VA should never result in a true overflow for (1).
      In order to recognise a true carry, __pv_offset is extended to 64 bits and
      the upper 32 bits will hold 0xffffffff when __pv_offset is in 2's complement
      form; for the same reason, an 'mvn #0' is patched in instead of a 'mov'.
      Since the mov, add and sub instructions are patched with different constants
      inside the same stub, the rotation field of the opcode is used to
      differentiate between them.
      
      So the above examples for v2p translation become, for VA=0xC0000000:
          1) PA[63:32] = 0xffffffff
             PA[31:0] = VA + 0xC0000000 --> results in a carry
             PA[63:32] = PA[63:32] + carry
      
             PA[63:0] = 0x0 80000000
      
          2) PA[63:32] = 0x1
             PA[31:0] = VA + 0xC0000000 --> results in a carry
             PA[63:32] = PA[63:32] + carry
      
             PA[63:0] = 0x2 80000000
      
      The above ideas were suggested by Nicolas Pitre <nico@linaro.org> as
      part of the review of the first and second versions of this patch.
      
      There is no corresponding change on the phys_to_virt() side, because
      computations on the upper 32-bits would be discarded anyway.
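      A small C model of the patched arithmetic may make the carry handling
      clearer. The real mechanism patches ARM 'mov'/'mvn'/'add' instructions;
      the constants below are just the two examples above.

        #include <stdint.h>

        /* __pv_offset kept as 64 bits: the high word is 0xffffffff when the
         * low word is a two's-complement (negative) offset, as in example (1).
         * Example (2) would use pv_offset_hi = 0x1. */
        static uint32_t pv_offset_hi = 0xffffffff;
        static uint32_t pv_offset_lo = 0xC0000000;

        static uint64_t v2p_model(uint32_t va)
        {
                uint32_t lo    = va + pv_offset_lo;     /* patched 'add', low word     */
                uint32_t carry = (lo < va);             /* carry out of the 32-bit add */
                uint32_t hi    = pv_offset_hi + carry;  /* patched 'mov'/'mvn' + carry */

                return ((uint64_t)hi << 32) | lo;
        }

        /* v2p_model(0xC0000000):
         *   example (1): hi = 0xffffffff + 1 -> 0x0, so PA = 0x0 80000000
         *   example (2): hi = 0x1        + 1 -> 0x2, so PA = 0x2 80000000 */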
      
      Cc: Russell King <linux@arm.linux.org.uk>
      Reviewed-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Sricharan R <r.sricharan@ti.com>
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      f52bb722
  16. 03 Oct, 2013 · 1 commit
    • ARM: 7846/1: Update SMP_ON_UP code to detect A9MPCore with 1 CPU devices · bc41b872
      Committed by Santosh Shilimkar
      The generic code is well equipped to differentiate between
      SMP and UP configurations. However, there are some devices which
      use the Cortex-A9 MPCore IP configured with a single CPU. To let
      these SoCs co-exist in a CONFIG_SMP=y build by leveraging
      the SMP_ON_UP support, we need to additionally check the
      number of cores in the Cortex-A9 MPCore configuration. Without
      such a check in place, the startup code tries to execute the
      ALT_SMP() set of instructions, which leads to CPU faults.
      
      The issue was spotted on TI's Aegis device and this patch now
      makes the device work with omap2plus_defconfig, which
      enables SMP by default. The change is limited to the
      Cortex-A9 MPCore detection code.
      
      Note that if any future SoC *does* use 0x0 as the PERIPH_BASE, then
      the SCU address check code needs to be #ifdef'd for the Aegis
      platform.
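      As a rough C illustration of the extra check (the real code is head.S
      assembly; the layout assumed here is the A9 MPCore SCU, whose
      Configuration Register at offset 0x04 holds the CPU count minus one in
      bits [1:0], with the SCU base taken from CP15 CBAR):

        #include <stdint.h>

        static unsigned int a9_scu_cpu_count(volatile uint32_t *scu_base)
        {
                /* SCU Configuration Register at scu_base + 0x04 */
                return (scu_base[1] & 0x3) + 1;
        }

        /* A CONFIG_SMP=y kernel can then take the SMP_ON_UP fixup path when
         * the count is 1, instead of faulting in ALT_SMP() instructions. */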
      Acked-by: Sricharan R <r.sricharan@ti.com>
      Signed-off-by: Vaibhav Bedia <vaibhav.bedia@ti.com>
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      bc41b872
  17. 01 Aug, 2013 · 1 commit
  18. 15 Jul, 2013 · 1 commit
    • arm: delete __cpuinit/__CPUINIT usage from all ARM users · 8bd26e3a
      Committed by Paul Gortmaker
      The __cpuinit type of throwaway sections might have made sense
      some time ago when RAM was more constrained, but now the savings
      do not offset the cost and complications.  For example, the fix in
      commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time")
      is a good example of the nasty type of bugs that can be created
      with improper use of the various __init prefixes.
      
      After a discussion on LKML[1] it was decided that cpuinit should go
      the way of devinit and be phased out.  Once all the users are gone,
      we can then finally remove the macros themselves from linux/init.h.
      
      Note that some harmless section mismatch warnings may result, since
      notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
      and are flagged as __cpuinit  -- so if we remove the __cpuinit from
      the arch specific callers, we will also get section mismatch warnings.
      As an intermediate step, we intend to turn the linux/init.h cpuinit
      related content into no-ops as early as possible, since that will get
      rid of these warnings.  In any case, they are temporary and harmless.
      
      This removes all the ARM uses of the __cpuinit macros from C code,
      and all __CPUINIT from assembly code.  There were also two ".previous"
      section statements that were paired off against __CPUINIT
      (aka .section ".cpuinit.text") which also get removed here.
      
      [1] https://lkml.org/lkml/2013/5/20/589
      
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      8bd26e3a
  19. 30 May, 2013 · 1 commit
  20. 03 Apr, 2013 · 1 commit
  21. 04 Mar, 2013 · 1 commit
  22. 17 Jan, 2013 · 1 commit
  23. 11 Jan, 2013 · 1 commit
  24. 19 Sep, 2012 · 1 commit
    • ARM: virt: allow the kernel to be entered in HYP mode · 80c59daf
      Committed by Dave Martin
      This patch does two things:
      
        * Ensure that asynchronous aborts are masked at kernel entry.
          The bootloader should be masking these anyway, but this reduces
          the damage window just in case it doesn't.
      
        * Enter svc mode via exception return to ensure that CPU state is
          properly serialised.  This does not matter when switching from
          an ordinary privileged mode ("PL1" modes in ARMv7-AR rev C
          parlance), but it potentially does matter when switching from
          another privileged mode such as hyp mode.
      
      This should allow the kernel to boot safely either from svc mode or
      hyp mode, even if no support for use of the ARM Virtualization
      Extensions is built into the kernel.
      Signed-off-by: Dave Martin <dave.martin@linaro.org>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      80c59daf
  25. 14 Sep, 2012 · 1 commit
  26. 10 Jul, 2012 · 1 commit
  27. 04 May, 2012 · 1 commit
  28. 29 Mar, 2012 · 1 commit
  29. 24 Mar, 2012 · 2 commits
  30. 13 Jan, 2012 · 1 commit
  31. 08 Dec, 2011 · 2 commits
  32. 06 Dec, 2011 · 2 commits
    • ARM: SMP: use idmap_pgd for mapping MMU enable during secondary booting · 4e8ee7de
      Committed by Will Deacon
      The ARM SMP booting code allocates a temporary set of page tables
      containing an identity mapping of the kernel image and provides this
      to secondary CPUs for initial booting.
      
      In reality, we only need to include the __turn_mmu_on function in the
      identity mapping since the rest of the kernel is executing from virtual
      addresses after this point.
      
      This patch adds __turn_mmu_on to the .idmap.text section, allowing the
      SMP booting code to use the idmap_pgd directly and not have to populate
      its own set of page tables.
      
      As a result of this patch, we can make the identity_mapping_add function
      static (since it is only used within mm/idmap.c) and also remove the
      identity_mapping_del function. The identity map population is moved to
      an early initcall so that it is set up in time for secondary CPU bringup.
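      A hedged sketch of that early initcall (function and symbol names are
      assumed to mirror mm/idmap.c and may not match the exact code):

        #include <linux/init.h>
        #include <linux/mm.h>
        #include <asm/pgalloc.h>

        /* linker symbols bounding the .idmap.text section */
        extern char __idmap_text_start[], __idmap_text_end[];

        pgd_t *idmap_pgd;

        static int __init init_static_idmap(void)
        {
                idmap_pgd = pgd_alloc(&init_mm);
                if (!idmap_pgd)
                        return -ENOMEM;

                /* identity-map only .idmap.text, which now holds __turn_mmu_on;
                 * identity_mapping_add() is the (now static) helper in this file */
                identity_mapping_add(idmap_pgd, (unsigned long)__idmap_text_start,
                                     (unsigned long)__idmap_text_end);
                return 0;
        }
        early_initcall(init_static_idmap);  /* before secondary CPU bringup */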
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      4e8ee7de
    • ARM: head.S: only include __turn_mmu_on in the initial identity mapping · 72662e01
      Committed by Will Deacon
      __create_page_tables identity maps the region of memory from
      __enable_mmu to the end of __turn_mmu_on.
      
      In preparation for including __turn_mmu_on in the .idmap.text section,
      this patch modifies the identity mapping so that it only includes the
      __turn_mmu_on code.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      72662e01
  33. 09 Nov, 2011 · 1 commit
  34. 26 Sep, 2011 · 2 commits