- 03 May 2016, 1 commit
-
-
Committed by Russell King
For kexec, we need more functionality from the IDMAP system: we need to be able to convert physical addresses to their identity-mapped versions as well as virtual addresses. Convert the existing arch_virt_to_idmap() to deal with physical addresses instead.
Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
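A minimal sketch of what a physically-addressed conversion might look like; the `arch_phys_to_idmap` hook and `phys_to_idmap_sketch()` wrapper are illustrative reconstructions, not the exact kernel interface:

```c
#include <linux/types.h>        /* phys_addr_t */

/* Assumed platform hook, mirroring the virtual-address version. */
static unsigned long (*arch_phys_to_idmap)(phys_addr_t paddr);

static inline unsigned long phys_to_idmap_sketch(phys_addr_t paddr)
{
        if (arch_phys_to_idmap)
                return arch_phys_to_idmap(paddr);   /* boot-time alias */
        return (unsigned long)paddr;                /* 1:1 by default */
}
```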
-
- 08 Feb 2016, 1 commit
-
-
Committed by Russell King
Make virt_to_idmap() return an unsigned long rather than phys_addr_t. Returning phys_addr_t here makes no sense, because the definition of virt_to_idmap() is that it shall return a physical address which maps identically with the virtual address. Since virtual addresses are limited to 32 bits, identity-mapped physical addresses are as well. Almost all users already had an implicit narrowing cast to unsigned long, so let's make this official and part of this interface.
Tested-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
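A sketch of the resulting signature, with the narrowing made explicit; the exact kernel definition may differ:

```c
#include <linux/types.h>        /* phys_addr_t (64-bit under LPAE) */
#include <asm/memory.h>         /* virt_to_phys() */

/* Before: phys_addr_t   virt_to_idmap(unsigned long x);
 * After:  unsigned long virt_to_idmap(unsigned long x);
 * The narrowing cast is now explicit and part of the interface:
 * an identity-mapped physical address must fit in 32 bits. */
static inline unsigned long virt_to_idmap_sketch(unsigned long x)
{
        return (unsigned long)virt_to_phys((void *)x);
}
```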
-
- 02 Dec 2015, 1 commit
-
-
Committed by Arnd Bergmann
In a multiplatform configuration, we may end up building a kernel for both Marvell PJ1 and an ARMv4 CPU implementation. In that case, the xscale-cp0 code is built with gcc -march=armv4{,t}, which results in a build error from the coprocessor instructions. Since we know this code will only have to run on an actual XScale processor, we can simply build the entire file for ARMv5TE. Related to this, we need to handle the iWMMXT initialization sequence differently during boot, to ensure we don't try to touch XScale-specific registers on other CPUs from the xscale_cp0_init initcall. cpu_is_xscale() used to be hardcoded to '1' in any configuration that enables any XScale-compatible core, but this breaks once we can have a combined kernel with MMP1 and something else. In this patch, I replace the existing cpu_is_xscale() macro with a new cpu_is_xscale_family() macro that evaluates true for xscale, xsc3 and mohawk, which makes the behavior more deterministic. The two existing users of cpu_is_xscale() are modified accordingly, but slightly change behavior for kernels that enable CPU_MOHAWK without also enabling CPU_XSCALE or CPU_XSC3. Previously, these would leave PMD_BIT4 in the page tables untouched; now they clear it, as we've always done for kernels that enable both MOHAWK and the support for the older CPU types. Since the previous behavior was inconsistent, I assume it was unintentional.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
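A hedged sketch of the kind of runtime test cpu_is_xscale_family() performs, keying on the main ID register; the part-number constants below are hypothetical placeholders, not the kernel's actual values:

```c
#define PART_XSCALE 0x0000      /* placeholder */
#define PART_XSC3   0x0001      /* placeholder */
#define PART_MOHAWK 0x0002      /* placeholder */

static inline unsigned int read_cpuid_part_sketch(void)
{
        unsigned int midr;

        /* Read the MIDR (main ID register). */
        asm("mrc p15, 0, %0, c0, c0, 0" : "=r"(midr));
        return (midr >> 4) & 0xfff;     /* primary part number field */
}

static inline int cpu_is_xscale_family_sketch(void)
{
        switch (read_cpuid_part_sketch()) {
        case PART_XSCALE:
        case PART_XSC3:
        case PART_MOHAWK:
                return 1;       /* all XScale-derived cores */
        }
        return 0;
}
```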
-
- 26 Sep 2014, 1 commit
-
-
Committed by Joe Perches
Use the more common pr_warn. Other miscellanea:
  o Coalesce formats
  o Realign arguments
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
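A small illustration of both cleanups; the message text here is made up for the example:

```c
#include <linux/printk.h>

static void report(unsigned long addr)
{
        /* Before (sketch): split format, old-style printk:
         *   printk(KERN_WARNING "idmap: failed to map "
         *          "address %lx\n", addr);
         * After: one coalesced format string via pr_warn():
         */
        pr_warn("idmap: failed to map address %lx\n", addr);
}
```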
-
- 02 Aug 2014, 1 commit
-
-
Committed by Russell King
Add a note about the usage of the identity mapping: we do not support accesses outside of the identity map region and kernel image while a CPU is using the identity map. This is because the identity mapping may overwrite vmalloc space, IO mappings, the vectors pages, etc.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 29 Jul 2014, 1 commit
-
-
Committed by Konstantin Khlebnikov
On LPAE, each level 1 (pgd) page table entry maps 1GiB, and the level 2 (pmd) entries map 2MiB. When the identity mapping is created on LPAE, the pgd pointers are copied from the swapper_pg_dir. If we find that we need to modify the contents of a pmd, we allocate a new empty pmd table and insert it into the appropriate 1GiB slot, before then filling it with the identity mapping. However, if the 1GiB slot covers the kernel lowmem mappings, we obliterate those mappings. When replacing a PMD, first copy the old PMD contents to the new PMD, so that we preserve the existing mappings, particularly the mappings of the kernel itself. [rewrote commit message and added code comment -- rmk]
Fixes: ae2de101 ("ARM: LPAE: Add identity mapping support for the 3-level page table format")
Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
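A sketch of the fix in context, using the kernel's page-table helpers; treat the function shape as illustrative rather than the exact patch:

```c
#include <linux/mm.h>
#include <linux/string.h>
#include <asm/pgalloc.h>

static void idmap_install_pmd_sketch(pud_t *pud, unsigned long addr)
{
        pmd_t *pmd = pmd_alloc_one(&init_mm, addr);

        if (!pmd)
                return;
        /* The fix: the 1GiB slot may already point at swapper's pmd
         * table (covering kernel lowmem), so copy its entries before
         * swapping in the new table. */
        if (!pud_none(*pud))
                memcpy(pmd, pmd_offset(pud, 0),
                       PTRS_PER_PMD * sizeof(pmd_t));
        pud_populate(&init_mm, pud, pmd);
        /* ...then write the identity-mapping entries into pmd... */
}
```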
-
- 11 Oct 2013, 2 commits
-
-
Committed by Santosh Shilimkar
Commit 9e9a367c ("ARM: Section based HYP idmap") moved the address conversion inside identity_mapping_add() without the corresponding print, which carries useful idmap information. Move the print inside identity_mapping_add() as well to fix this.
Cc: Will Deacon <will.deacon@arm.com>
Cc: Nicolas Pitre <nico@linaro.org>
Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
-
Committed by Santosh Shilimkar
On some PAE systems (e.g. TI Keystone), memory is above the 32-bit addressable limit, and the interconnect provides an aliased view of parts of physical memory in the 32-bit addressable space. This alias is strictly for boot-time usage, and is not otherwise usable because of coherency limitations. On such systems, the idmap mechanism needs to take this aliased mapping into account. This patch introduces virt_to_idmap() and an arch function pointer which can be populated by platforms that need it. It also populates the necessary idmap spots with the now-available virt_to_idmap(). An #ifdef approach was avoided to stay compatible with multi-platform builds: most platforms won't touch the pointer, in which case virt_to_idmap() falls back to the existing virt_to_phys() macro.
Cc: Russell King <linux@arm.linux.org.uk>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
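A minimal sketch of the hook-plus-fallback shape described; the exact kernel definition may differ slightly:

```c
#include <asm/memory.h>         /* virt_to_phys() */

/* Platforms with a boot-time alias assign this pointer early;
 * everyone else leaves it NULL and gets plain virt_to_phys(). */
unsigned long (*arch_virt_to_idmap)(unsigned long x);

static inline unsigned long virt_to_idmap_sketch(unsigned long x)
{
        if (arch_virt_to_idmap)
                return arch_virt_to_idmap(x);   /* platform alias */
        return virt_to_phys((void *)x);         /* common case */
}
```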
-
- 29 Apr 2013, 1 commit
-
-
Committed by Marc Zyngier
After the HYP page table rework, it is pretty easy to let the KVM code provide its own idmap, rather than expecting the kernel to provide it. It actually takes less code to do so.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
-
- 04 Mar 2013, 1 commit
-
-
Committed by Will Deacon
The ARM ARM requires branch predictor maintenance if, for a given ASID, the instructions at a specific virtual address appear to change. From the kernel's point of view, that means:
- Changing the kernel's view of memory (e.g. switching to the identity map)
- ASID rollover (since ASIDs will be re-allocated to new tasks)
This patch adds explicit branch predictor maintenance when either of the two conditions above is met.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
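A sketch of the maintenance operation itself on ARMv7, mirroring what a helper like local_flush_bp_all() does; the exact form is illustrative:

```c
static inline void bp_invalidate_all_sketch(void)
{
        const int zero = 0;

        /* BPIALL: invalidate the entire branch predictor array. */
        asm volatile("mcr p15, 0, %0, c7, c5, 6" : : "r"(zero) : "memory");
        asm volatile("isb");    /* ensure later fetches see the invalidation */
}
```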
-
- 24 Jan 2013, 1 commit
-
-
Committed by Christoffer Dall
Add a method (hyp_idmap_setup) to populate a hyp pgd with an identity mapping of the code contained in the .hyp.idmap.text section. Offer a method to drop this identity mapping through hyp_idmap_teardown. Make all of the above depend on CONFIG_ARM_VIRT_EXT and CONFIG_ARM_LPAE.
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
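A rough sketch of what such a setup routine involves; `map_identity_range()` is a hypothetical helper standing in for the real population code, while the section-bound symbols follow the usual linker-script convention:

```c
#include <asm/memory.h>     /* virt_to_phys() */
#include <asm/pgtable.h>    /* pgd_t */

/* Linker-provided bounds of the .hyp.idmap.text section. */
extern char __hyp_idmap_text_start[], __hyp_idmap_text_end[];

/* Hypothetical helper: maps [pa, pa+len) at VA == PA into pgd. */
int map_identity_range(pgd_t *pgd, unsigned long pa, unsigned long len);

int hyp_idmap_setup_sketch(pgd_t *hyp_pgd)
{
        unsigned long pa  = virt_to_phys(__hyp_idmap_text_start);
        unsigned long len = __hyp_idmap_text_end - __hyp_idmap_text_start;

        return map_identity_range(hyp_pgd, pa, len);
        /* hyp_idmap_teardown would unmap the same range. */
}
```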
-
- 13 Nov 2012, 1 commit
-
-
Committed by Nicolas Pitre
Flushing the cache is needed for the hardware to see the idmap table and can therefore be done at init time. On ARMv7 it is not necessary to flush L2, so flush_cache_louis() is used here instead. There is no point flushing the cache in setup_mm_for_reboot() as the caller should be, and already is, taking care of this. If switching the memory map requires a cache flush, then cpu_switch_mm() already includes that operation. What is not done by cpu_switch_mm() on ASID-capable CPUs is TLB flushing, as the whole point of the ASID is to tag the TLBs and avoid flushing them on a context switch. Since we don't have a clean ASID for the identity mapping, we need to flush the TLB explicitly in that case. Otherwise this is already performed by cpu_switch_mm().
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
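A sketch of the reboot path this leaves behind, close to the shape described; treat it as illustrative:

```c
#include <asm/idmap.h>          /* idmap_pgd */
#include <asm/mmu_context.h>    /* cpu_switch_mm() */
#include <asm/tlbflush.h>       /* local_flush_tlb_all() */

void setup_mm_for_reboot_sketch(void)
{
        /* Any cache maintenance the switch needs is done inside. */
        cpu_switch_mm(idmap_pgd, &init_mm);

#ifdef CONFIG_CPU_HAS_ASID
        /* The identity map has no clean ASID of its own, so the
         * TLB must be flushed explicitly on ASID-capable CPUs. */
        local_flush_tlb_all();
#endif
}
```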
-
- 29 Mar 2012, 1 commit
-
-
Committed by David Howells
Disintegrate asm/system.h for ARM.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Russell King <linux@arm.linux.org.uk>
cc: linux-arm-kernel@lists.infradead.org
-
- 08 Dec 2011, 1 commit
-
-
Committed by Catalin Marinas
With LPAE, the pgd is a separate page table with entries pointing to the pmd. The identity_mapping_add() function needs to ensure that the pgd is populated before populating the pmd level. The do..while blocks now loop over the pmd in order to have the same implementation for the two page table formats. The pmd_addr_end() definition has been removed and the generic one is used instead. The pmd clean-up is done in the pgd_free() function.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
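A sketch of the pmd-level do..while walk described, using the generic pmd_addr_end(); the per-entry worker named here is a hypothetical placeholder:

```c
#include <linux/mm.h>
#include <asm/pgtable.h>

/* Hypothetical per-entry worker: writes identity sections into *pmd. */
static void idmap_fill_pmd(pmd_t *pmd, unsigned long addr,
                           unsigned long end, unsigned long prot);

static void idmap_add_pud_sketch(pud_t *pud, unsigned long addr,
                                 unsigned long end, unsigned long prot)
{
        pmd_t *pmd = pmd_offset(pud, addr);
        unsigned long next;

        do {
                next = pmd_addr_end(addr, end);     /* generic helper */
                idmap_fill_pmd(pmd, addr, next, prot);
        } while (pmd++, addr = next, addr != end);
}
```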
-
- 06 Dec 2011, 3 commits
-
-
Committed by Will Deacon
The ARM SMP booting code allocates a temporary set of page tables containing an identity mapping of the kernel image and provides this to secondary CPUs for initial booting. In reality, we only need to include the __turn_mmu_on function in the identity mapping, since the rest of the kernel is executing from virtual addresses after this point. This patch adds __turn_mmu_on to the .idmap.text section, allowing the SMP booting code to use idmap_pgd directly and not have to populate its own set of page tables. As a result of this patch, we can make the identity_mapping_add function static (since it is only used within mm/idmap.c) and also remove the identity_mapping_del function. The identity map population is moved to an early initcall so that it is set up in time for secondary CPU bringup.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
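A sketch of the early-initcall hook this describes; the body is a placeholder for the actual section-mapping code:

```c
#include <linux/errno.h>
#include <linux/init.h>
#include <asm/idmap.h>      /* idmap_pgd */
#include <asm/pgalloc.h>    /* pgd_alloc() */

static int __init init_static_idmap_sketch(void)
{
        idmap_pgd = pgd_alloc(&init_mm);
        if (!idmap_pgd)
                return -ENOMEM;

        /* ...identity-map the .idmap.text section into idmap_pgd... */
        return 0;
}
/* Runs before SMP bringup, so secondaries can use idmap_pgd. */
early_initcall(init_static_idmap_sketch);
```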
-
Committed by Will Deacon
For soft-rebooting a system, it is necessary to map the MMU-off code with an identity mapping so that execution can continue safely once the MMU has been switched off. Currently, switch_mm_for_reboot takes out a 1:1 mapping from 0x0 to TASK_SIZE during reboot in the hope that the reset code lives at a physical address corresponding to a userspace virtual address. This patch modifies the code so that we switch to the idmap_pgd tables, which contain a 1:1 mapping of the cpu_reset code. This has the advantage of only remapping the code that we need, and also means we don't need to worry about allocating a pgd from an atomic context in the case that the physical address of the cpu_reset code aliases with the virtual space used by the kernel.
Acked-by: Dave Martin <dave.martin@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon
When disabling and re-enabling the MMU, it is necessary to take out an identity mapping for the code that manipulates the SCTLR in order to avoid it disappearing from under our feet. This is useful when soft rebooting and returning from CPU suspend. This patch allocates a set of page tables during boot and populates them with an identity mapping for the .idmap.text section. This means that users of the identity map do not need to manage their own pgd and can instead annotate their functions with __idmap or, in the case of assembly code, place them in the correct section.
Acked-by: Dave Martin <dave.martin@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
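A sketch of the annotation mechanism: a section attribute plus linker-script bounds. The attribute list on the real __idmap macro may differ:

```c
/* Place a function in .idmap.text so the boot code identity-maps it. */
#define __idmap __attribute__((__section__(".idmap.text")))

/* Bounds emitted by the linker script around the section. */
extern char __idmap_text_start[], __idmap_text_end[];

/* Example user: an MMU-off helper survives the switch because its
 * code is mapped at VA == PA. */
static void __idmap mmu_off_helper_sketch(void)
{
        /* ...manipulate SCTLR safely from the identity map... */
}
```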
-
- 11 Nov 2011, 1 commit
-
-
Committed by Russell King
setup_mm_for_reboot() doesn't make use of its argument, so remove it.
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 22 Feb 2011, 1 commit
-
-
Committed by Russell King
Add pud_offset() et al. between the pgd and pmd code in preparation for using pgtable-nopud.h rather than 4level-fixup.h. This incorporates a fix from Jamie Iles <jamie@jamieiles.com> for uaccess_with_memcpy.c.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
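A sketch of the walk this enables; with pgtable-nopud.h the pud step folds into the pgd, so on 2-level ARM it compiles away:

```c
#include <linux/mm.h>
#include <asm/pgtable.h>

static pmd_t *walk_to_pmd_sketch(pgd_t *pgd_base, unsigned long addr)
{
        pgd_t *pgd = pgd_base + pgd_index(addr);
        pud_t *pud = pud_offset(pgd, addr);   /* no-op when pud is folded */

        return pmd_offset(pud, addr);
}
```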
-
- 22 Dec 2010, 2 commits
-
-
Committed by Russell King
Remove some knowledge of our 2-level page table layout from the identity mapping code: we assume that a step size of PGDIR_SIZE will allow us to step over all entries. While this is true today, it won't be true in the near future.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
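A sketch of the layout-agnostic loop shape, advancing with pgd_addr_end() instead of a hard-coded PGDIR_SIZE stride:

```c
#include <linux/mm.h>
#include <asm/pgtable.h>

static void idmap_range_sketch(pgd_t *pgd_base,
                               unsigned long addr, unsigned long end)
{
        pgd_t *pgd = pgd_base + pgd_index(addr);
        unsigned long next;

        do {
                /* Clamps to the end of this pgd entry, whatever the
                 * underlying table layout. */
                next = pgd_addr_end(addr, end);
                /* ...populate the identity mapping for [addr, next)... */
        } while (pgd++, addr = next, addr != end);
}
```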
-
Committed by Russell King
We have two places where we create identity mappings: one where we bring secondary CPUs online, and one where we set up some mappings for soft-reboot. Combine these two into a single implementation. Also collect the identity mapping deletion function.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-