1. 03 May, 2016: 1 commit
  2. 08 Feb, 2016: 1 commit
  3. 02 Dec, 2015: 1 commit
    • ARM: make xscale iwmmxt code multiplatform aware · d33c43ac
      Committed by Arnd Bergmann
      In a multiplatform configuration, we may end up building a kernel for
      both Marvell PJ1 and an ARMv4 CPU implementation. In that case, the
      xscale-cp0 code is built with gcc -march=armv4{,t}, which results in a
      build error from the coprocessor instructions.
      
      Since we know this code will only have to run on an actual xscale
      processor, we can simply build the entire file for ARMv5TE.
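One plausible way to express "build the entire file for ARMv5TE" in kbuild is a per-object compiler-flag override; this fragment is a sketch (the exact flags in the merged patch may differ):

```make
# arch/arm/mm/Makefile (illustrative): always compile the XScale cp0
# code for ARMv5TE, regardless of the multiplatform -march baseline,
# so the coprocessor instructions assemble even in an ARMv4 config.
CFLAGS_xscale-cp0.o := -march=armv5te
```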
      
      Related to this, we need to handle the iWMMXT initialization sequence
      differently during boot, to ensure we don't try to touch xscale
      specific registers on other CPUs from the xscale_cp0_init initcall.
      cpu_is_xscale() used to be hardcoded to '1' in any configuration that
      enables any XScale-compatible core, but this breaks once we can have a
      combined kernel with MMP1 and something else.
      
      In this patch, I replace the existing cpu_is_xscale() macro with a new
      cpu_is_xscale_family() macro that evaluates true for xscale, xsc3 and
      mohawk, which makes the behavior more deterministic.
      
      The two existing users of cpu_is_xscale() are modified accordingly,
      but this slightly changes behavior for kernels that enable CPU_MOHAWK
      without also enabling CPU_XSCALE or CPU_XSC3. Previously, these would
      leave PMD_BIT4 in the page tables untouched; now they clear it, as
      we have always done for kernels that enable both MOHAWK and support
      for the older CPU types.
      
      Since the previous behavior was inconsistent, I assume it was
      unintentional.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  4. 26 Sep, 2014: 1 commit
  5. 02 Aug, 2014: 1 commit
  6. 29 Jul, 2014: 1 commit
    • ARM: 8115/1: LPAE: reduce damage caused by idmap to virtual memory layout · 811a2407
      Committed by Konstantin Khlebnikov
      On LPAE, each level 1 (pgd) page table entry maps 1GiB, and the level 2
      (pmd) entries map 2MiB.
      
      When the identity mapping is created on LPAE, the pgd pointers are copied
      from the swapper_pg_dir.  If we find that we need to modify the contents
      of a pmd, we allocate a new empty pmd table and insert it into the
      appropriate 1GiB slot, before filling it with the identity mapping.
      
      However, if the 1GiB slot covers the kernel lowmem mappings, we obliterate
      those mappings.
      
      When replacing a PMD, first copy the old PMD contents to the new PMD, so
      that we preserve the existing mappings, particularly the mappings of the
      kernel itself.
      
      [rewrote commit message and added code comment -- rmk]
      
      Fixes: ae2de101 ("ARM: LPAE: Add identity mapping support for the 3-level page table format")
      Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  7. 11 Oct, 2013: 2 commits
    • ARM: mm: Move the idmap print to appropriate place in the code · c1a5f4f6
      Committed by Santosh Shilimkar
      Commit 9e9a367c ("ARM: Section based HYP idmap") moved the address
      conversion inside identity_mapping_add() without the accompanying
      print, which carries useful idmap information.
      
      Move the print inside identity_mapping_add() as well to fix this.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Nicolas Pitre <nico@linaro.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
    • ARM: mm: Introduce virt_to_idmap() with an arch hook · 4dc9a817
      Committed by Santosh Shilimkar
      On some PAE systems (e.g. TI Keystone), memory is above the
      32-bit addressable limit, and the interconnect provides an
      aliased view of parts of physical memory in the 32-bit addressable
      space.  This alias is strictly for boot time usage, and is not
      otherwise usable because of coherency limitations. On such systems,
      the idmap mechanism needs to take this aliased mapping into account.
      
      This patch introduces virt_to_idmap() and an arch function pointer
      which can be populated by any platform that needs it, and converts
      the necessary idmap spots to use the now-available virt_to_idmap().
      An #ifdef approach was avoided in order to stay compatible with
      multi-platform builds.
      
      Most platforms won't touch the hook, and in that case virt_to_idmap()
      falls back to the existing virt_to_phys() macro.
      
      Cc: Russell King <linux@arm.linux.org.uk>
      Acked-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
  8. 29 Apr, 2013: 1 commit
  9. 04 Mar, 2013: 1 commit
  10. 24 Jan, 2013: 1 commit
  11. 13 Nov, 2012: 1 commit
    • ARM: 7573/1: idmap: use flush_cache_louis() and flush TLBs only when necessary · e4067855
      Committed by Nicolas Pitre
      Flushing the cache is needed for the hardware to see the idmap table
      and therefore can be done at init time.  On ARMv7 it is not necessary to
      flush L2 so flush_cache_louis() is used here instead.
      
      There is no point flushing the cache in setup_mm_for_reboot() as the
      caller should, and already is, taking care of this.  If switching the
      memory map requires a cache flush, then cpu_switch_mm() already includes
      that operation.
      
      What is not done by cpu_switch_mm() on ASID capable CPUs is TLB flushing
      as the whole point of the ASID is to tag the TLBs and avoid flushing them
      on a context switch.  Since we don't have a clean ASID for the identity
      mapping, we need to flush the TLB explicitly in that case.  Otherwise
      this is already performed by cpu_switch_mm().
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  12. 29 Mar, 2012: 1 commit
  13. 08 Dec, 2011: 1 commit
    • ARM: LPAE: Add identity mapping support for the 3-level page table format · ae2de101
      Committed by Catalin Marinas
      With LPAE, the pgd is a separate page table with entries pointing to the
      pmd. The identity_mapping_add() function needs to ensure that the pgd is
      populated before populating the pmd level. The do..while blocks now loop
      over the pmd in order to have the same implementation for the two page
      table formats. The pmd_addr_end() definition has been removed and the
      generic one used instead. The pmd clean-up is done in the pgd_free()
      function.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  14. 06 Dec, 2011: 3 commits
    • ARM: SMP: use idmap_pgd for mapping MMU enable during secondary booting · 4e8ee7de
      Committed by Will Deacon
      The ARM SMP booting code allocates a temporary set of page tables
      containing an identity mapping of the kernel image and provides this
      to secondary CPUs for initial booting.
      
      In reality, we only need to include the __turn_mmu_on function in the
      identity mapping since the rest of the kernel is executing from virtual
      addresses after this point.
      
      This patch adds __turn_mmu_on to the .idmap.text section, allowing the
      SMP booting code to use the idmap_pgd directly rather than having to
      populate its own set of page tables.
      
      As a result of this patch, we can make the identity_mapping_add function
      static (since it is only used within mm/idmap.c) and also remove the
      identity_mapping_del function. The identity map population is moved to
      an early initcall so that it is set up in time for secondary CPU bringup.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • ARM: idmap: use idmap_pgd when setting up mm for reboot · 2c8951ab
      Committed by Will Deacon
      For soft-rebooting a system, it is necessary to map the MMU-off code
      with an identity mapping so that execution can continue safely once the
      MMU has been switched off.
      
      Currently, switch_mm_for_reboot takes out a 1:1 mapping from 0x0 to
      TASK_SIZE during reboot in the hope that the reset code lives at a
      physical address corresponding to a userspace virtual address.
      
      This patch modifies the code so that we switch to the idmap_pgd tables,
      which contain a 1:1 mapping of the cpu_reset code. This has the
      advantage of only remapping the code that we need and also means we
      don't need to worry about allocating a pgd from an atomic context in the
      case that the physical address of the cpu_reset code aliases with the
      virtual space used by the kernel.
      Acked-by: Dave Martin <dave.martin@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • ARM: idmap: populate identity map pgd at init time using .init.text · 8903826d
      Committed by Will Deacon
      When disabling and re-enabling the MMU, it is necessary to take out an
      identity mapping for the code that manipulates the SCTLR in order to
      avoid it disappearing from under our feet. This is useful when soft
      rebooting and returning from CPU suspend.
      
      This patch allocates a set of page tables during boot and populates them
      with an identity mapping for the .idmap.text section. This means that
      users of the identity map do not need to manage their own pgd and can
      instead annotate their functions with __idmap or, in the case of assembly
      code, place them in the correct section.
      Acked-by: Dave Martin <dave.martin@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  15. 11 Nov, 2011: 1 commit
  16. 22 Feb, 2011: 1 commit
  17. 22 Dec, 2010: 2 commits