- 29 March 2012, 1 commit
-
Committed by Russell King
Avoid namespace conflicts with drivers over the CP15 definitions by moving CP15-related prototypes and definitions to a private header file.
Acked-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Stephen Warren <swarren@nvidia.com> [Tegra]
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com> [EP93xx]
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: David Howells <dhowells@redhat.com>
-
- 13 January 2012, 1 commit
-
Committed by Catalin Marinas
This patch adds a check for the presence of the LPAE feature during the CPU initialisation. If not present, it reports an error when CONFIG_DEBUG_LL is enabled.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
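For reference, a CPU's translation-table capability is advertised in the VMSA field of ID_MMFR0, so the check can be as small as the sketch below (register choice and the error label are illustrative, not the exact head.S code):

        mrc     p15, 0, r3, c0, c1, 4          @ read ID_MMFR0
        and     r3, r3, #0xf                   @ VMSA support field, bits [3:0]
        cmp     r3, #5                         @ 5 or more: long-descriptor (LPAE) format supported
        blo     __error_lpae                   @ illustrative error path taken when LPAE is absent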
-
- 08 December 2011, 2 commits
-
Committed by Catalin Marinas
This patch adds the MMU initialisation for the LPAE page table format. The swapper_pg_dir size with LPAE is 5 rather than 4 pages. A new proc-v7-3level.S file contains the TTB initialisation, context switch and PTE setting code with LPAE. The TTBRx split is based on PAGE_OFFSET, with TTBR1 used for the kernel mappings. The 36-bit mappings (supersections) and a few other memory types in mmu.c are conditionally compiled.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Will Deacon
Before we enable the MMU, we must ensure that the TTBR registers contain sane values. After the MMU has been enabled, we jump to the *virtual* address of the following function, so we also need to ensure that the SCTLR write has taken effect. This patch adds ISB instructions around the SCTLR write to ensure the visibility of the above.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
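The ordering requirement boils down to the pattern below (a simplified ARMv7 sketch: the barriers are shown as plain isb, and r13 holding the virtual continuation address is only an illustrative convention):

        isb                                    @ TTBR/CP15 set-up above must be visible...
        mcr     p15, 0, r0, c1, c0, 0          @ ...before this SCTLR write enables the MMU
        mrc     p15, 0, r3, c0, c0, 0          @ read an ID register to serialise
        isb                                    @ SCTLR write has taken effect...
        mov     pc, r13                        @ ...before jumping to the virtual address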
-
- 06 December 2011, 2 commits
-
Committed by Will Deacon
The ARM SMP booting code allocates a temporary set of page tables containing an identity mapping of the kernel image and provides this to secondary CPUs for initial booting. In reality, we only need to include the __turn_mmu_on function in the identity mapping, since the rest of the kernel is executing from virtual addresses after this point. This patch adds __turn_mmu_on to the .idmap.text section, allowing the SMP booting code to use the idmap_pgd directly and not have to populate its own set of page tables. As a result of this patch, we can make the identity_mapping_add function static (since it is only used within mm/idmap.c) and also remove the identity_mapping_del function. The identity map population is moved to an early initcall so that it is set up in time for secondary CPU bringup.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
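Collecting code into the shared identity map is just a matter of section placement; a minimal sketch of the pattern (the label is illustrative, and .idmap.text is the section the linker script gathers for the identity mapping):

        .pushsection .idmap.text, "ax"         @ everything here ends up identity mapped
turn_mmu_on_sketch:
        mcr     p15, 0, r0, c1, c0, 0          @ enable the MMU while running from the 1:1 map
        mrc     p15, 0, r3, c0, c0, 0          @ read an ID register to serialise
        mov     pc, r13                        @ continue at the virtual address held in r13
        .popsection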
-
Committed by Will Deacon
__create_page_tables identity maps the region of memory from __enable_mmu to the end of __turn_mmu_on. In preparation for including __turn_mmu_on in the .idmap.text section, this patch modifies the identity mapping so that it only includes the __turn_mmu_on code.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 09 November 2011, 1 commit
-
Committed by Catalin Marinas
Recent gcc versions generate unaligned accesses by default on ARMv6 and later processors. This patch ensures that the SCTLR.A bit is always cleared on such processors to avoid the kernel trapping before alignment_init() is called.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: John Linn <John.Linn@xilinx.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
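Clearing the bit amounts to a read-modify-write of SCTLR during CPU setup; a minimal sketch (the real fix adjusts the per-CPU setup tables in proc-*.S rather than open-coding this):

        mrc     p15, 0, r0, c1, c0, 0          @ read SCTLR
        bic     r0, r0, #0x0002                @ clear the A bit: don't fault the unaligned
                                               @ accesses that recent gcc emits
        mcr     p15, 0, r0, c1, c0, 0          @ write SCTLR back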
-
- 26 September 2011, 2 commits
-
Committed by Nicolas Pitre
When the CONFIG_NO_MACH_MEMORY_H symbol is selected by a particular machine class, the machine-specific memory.h include file is no longer used and can be removed. In that case the equivalent information can be obtained dynamically at runtime by enabling CONFIG_ARM_PATCH_PHYS_VIRT or by specifying the physical memory address at kernel configuration time. If/when all instances of mach/memory.h are removed then this symbol could be removed.
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
Committed by Nicolas Pitre
Some platforms (like OMAP, not to name it) are doing rather complicated hacks just to determine the base UART address to use. Let's give their addruart macro some slack by providing an extra work register, which will allow for much needed cleanups. This is basically a no-op: this commit only adds the extra argument to the macro; no one is using it yet.
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Reviewed-by: Kevin Hilman <khilman@ti.com>
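With the extra argument, a platform's addruart can scribble on the third register while computing the physical (\rp) and virtual (\rv) UART bases. The macro below is purely hypothetical - the addresses and the selection logic are made up - and only illustrates the calling convention:

        .macro  addruart, rp, rv, tmp
        mrc     p15, 0, \tmp, c0, c0, 0        @ made-up example: pick the port by CPU ID,
        and     \tmp, \tmp, #0x00f0            @ using the new scratch register freely
        teq     \tmp, #0x00c0
        ldreq   \rp, =0x48020000               @ hypothetical UART physical bases
        ldrne   \rp, =0x4806a000
        orr     \rv, \rp, #0xb0000000          @ hypothetical fixed phys->virt offset
        .endm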
-
- 23 August 2011, 1 commit
-
Committed by Catalin Marinas
PGDIR_SHIFT and PMD_SHIFT for the classic 2-level page table format have the same value (21). This patch converts the PGDIR_* uses in the kernel to the PMD_* equivalent so that LPAE builds can reuse the same code.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 13 August 2011, 1 commit
-
Committed by Nicolas Pitre
This code can be removed now that MSM targets no longer need the 16-bit offsets for P2V.
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 19 July 2011, 1 commit
-
Committed by Dave Martin
Currently, the documented kernel entry requirements are not explicit about whether the kernel should be entered in ARM or Thumb, leading to an ambiguity about how to enter Thumb-2 kernels. As a result, the kernel is reliant on the zImage decompressor to enter the kernel proper in the correct instruction set state. This patch changes the boot entry protocol for head.S and Image to be the same as for zImage: in all cases, the kernel is now entered in ARM. Documentation/arm/Booting is updated to reflect this new policy. A different rule will be needed for Cortex-M class CPUs as and when support for those lands in mainline, since these CPUs don't support the ARM instruction set at all: a note is added to the effect that the kernel must be entered in Thumb on such systems.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 26 May 2011, 1 commit
-
Committed by Catalin Marinas
This patch makes TTBR1 point to swapper_pg_dir so that global, kernel mappings can be used exclusively on v6 and v7 cores where they are needed.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 11 May 2011, 1 commit
-
Committed by Grant Likely
The dtb is passed to the kernel via register r2, which is the same method that is used to pass an atags pointer. This patch modifies __vet_atags to not clear r2 when it encounters a dtb image. v2: fixed bugs pointed out by Nicolas Pitre.
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
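Conceptually the new path just looks for the flattened-device-tree magic before falling back to the ATAGS checks; a little-endian sketch with an illustrative branch target:

        ldr     r5, [r2]                       @ first word of whatever r2 points at
        ldr     r6, =0xedfe0dd0                @ FDT magic 0xd00dfeed as read by a little-endian CPU
        cmp     r5, r6                         @ is r2 a device tree blob?
        moveq   pc, lr                         @ yes: return with r2 left intact
        b       vet_atags_sketch               @ no: run the usual ATAGS validation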
-
- 11 March 2011, 1 commit
-
Committed by Nicolas Pitre
Add Thumb-2 support to the runtime patching of the virt_to_phys and phys_to_virt opcodes. Tested both the 8-bit and the 16-bit fixups, using different placements in memory to exercise all code paths.
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Reviewed-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 18 February 2011, 4 commits
-
Committed by Rob Herring
If the ATAGS or DTB pointer is not within the first 1MB of RAM, then the boot params will not be mapped early enough, so map the 1MB region that r2 points to. Only map the first 1MB when r2 is 0. Some assembly improvements from Nicolas Pitre.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
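With the classic 2-level tables this is one extra 1MiB section entry: round r2 down to a section boundary and write a descriptor for it. A simplified sketch, where r4 is the page-table base, r7 holds the section MMU flags, and the fall-back address is hypothetical:

        mov     r0, r2, lsr #20                @ which 1MiB section holds the ATAGS/DTB?
        movs    r0, r0, lsl #20                @ r0 = r2 rounded down to 1MiB (0 if r2 == 0)
        moveq   r0, #0x80000000                @ hypothetical: no pointer, map the first MiB of RAM
        orr     r3, r7, r0                     @ build the section descriptor
        str     r3, [r4, r0, lsr #18]          @ table index = (addr >> 20), 4 bytes per entry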
-
Committed by Russell King
MSM's memory is aligned to 2MB, which is more than we can do with our existing method, as we're limited to the upper 8 bits. Extend this to 16 bits by using two instructions, automatically selected when MSM is enabled.
Acked-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
This idea came from Nicolas; Eric Miao produced an initial version, which was then rewritten into this. Patch the physical-to-virtual translations at runtime. As we modify the code, this makes it incompatible with XIP kernels, but allows us to achieve this with minimal loss of performance. Many translations are of the form:

        physical = virtual + (PHYS_OFFSET - PAGE_OFFSET)
        virtual  = physical - (PHYS_OFFSET - PAGE_OFFSET)

so we generate an 'add' instruction for __virt_to_phys() and a 'sub' instruction for __phys_to_virt(). We calculate (PHYS_OFFSET - PAGE_OFFSET) at run time by comparing the address prior to MMU initialization with where it should be once the MMU has been initialized, and place this constant into the above add/sub instructions. Once we have (PHYS_OFFSET - PAGE_OFFSET), we can calculate the real PHYS_OFFSET, as PAGE_OFFSET is a build-time constant, and save this for the C-mode PHYS_OFFSET variable definition to use. At present, we are unable to support Realview with Sparsemem enabled, as this uses a complex mapping function, or MSM, as this requires a constant which will not fit in our math instruction. Add a module version magic string for this feature to prevent incompatible modules being loaded.
Tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
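Each translation site therefore becomes a single add/sub carrying a placeholder immediate, with its address recorded so the early boot code can walk the list and patch the real (PHYS_OFFSET - PAGE_OFFSET) constant in. A stripped-down sketch of the idea (not the kernel's actual stub macro; the placeholder value is arbitrary):

1:      add     r0, r1, #0x81000000            @ __virt_to_phys(): placeholder offset,
                                               @ rewritten at boot with PHYS_OFFSET - PAGE_OFFSET
        .pushsection .pv_table, "a"
        .long   1b                             @ remember where the instruction lives
        .popsection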
-
Committed by Russell King
head.S makes use of PHYS_OFFSET. When it becomes a variable, the assembler won't understand this. Compute PHYS_OFFSET by the following method. This code is linked at its virtual address, but runs before the MMU is enabled, so at its physical address:

1:      .long   .
        .long   PAGE_OFFSET

        adr     r0, 1b                         @ r0 = physical '.'
        ldmia   r0, {r1, r2}                   @ r1 = virtual '.', r2 = PAGE_OFFSET
        sub     r1, r0, r1                     @ r1 = physical - virtual
        add     r2, r2, r1                     @ r2 = PAGE_OFFSET + physical - virtual
                                               @    := PHYS_OFFSET

Switch XIP users of PHYS_OFFSET to use PLAT_PHYS_OFFSET - we can't use this method for XIP kernels as the code doesn't execute in RAM.
Tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 16 February 2011, 1 commit
-
Committed by Russell King
Since the debug macros no longer depend on the machine type information, the machine type lookup can be deferred to setup_arch() in setup.c, which simplifies the code somewhat. We also move the __error_a functionality into setup.c for displaying a message when a bad machine ID is passed to the kernel via the LL debug code. We also log this into the kernel ring buffer, which makes it possible to retrieve the message via a debugger. Original idea from Grant Likely.
Acked-by: Grant Likely <grant.likely@secretlab.ca>
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 10 February 2011, 1 commit
-
Committed by Russell King
With certain configurations, we inline the unlock functions in modules, which results in SMP alternatives being created in modules. We need to fix those up when loading a module to prevent undefined instruction faults.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 01 February 2011, 1 commit
-
Committed by Russell King
Allow non-ARM SMP processors to use the SMP_ON_UP feature. CPUs supporting SMP must have the new CPU ID format, so check for this first. Then check for ARM11MPCore, which fails the MPIDR check. Lastly, check that the MPIDR reports multiprocessing extensions and that the CPU is part of a multiprocessing system.
Cc: <stable@kernel.org>
Reported-and-Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
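The decisive test is on MPIDR: bit 31 says the register uses the multiprocessing format and bit 30 (the U bit) flags a uniprocessor. A sketch of that final step, with an illustrative patch-to-UP target:

        mrc     p15, 0, r0, c0, c0, 5          @ read MPIDR
        and     r0, r0, #0xc0000000            @ keep bit 31 (MP format) and bit 30 (U)
        teq     r0, #0x80000000                @ MP extensions present and not uniprocessor?
        moveq   pc, lr                         @ yes: keep the SMP code as built
        b       fixup_smp_on_up_sketch         @ no: go and rewrite the SMP-only instructions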
-
- 20 December 2010, 2 commits
-
Committed by Dave Martin
* __fixup_smp_on_up has been modified with support for the THUMB2_KERNEL case. For THUMB2_KERNEL only, fixups are split into halfwords in case of misalignment, since we can't rely on unaligned accesses working before turning the MMU on. No attempt is made to optimise the aligned case, since the number of fixups is typically small, and it seems best to keep the code as simple as possible.
* Add a rotate in the fixup_smp code in order to support CPU_BIG_ENDIAN, as suggested by Nicolas Pitre.
* Add an assembly-time sanity check to ALT_UP() to ensure that the content really is the right size (4 bytes). (No check is done for ALT_SMP(). Possibly, this could be fixed by splitting the two uses of ALT_SMP() (ALT_SMP...SMP_UP versus ALT_SMP...SMP_UP_B) into two macros. In the first case, ALT_SMP needs to expand to >= 4 bytes, not == 4.)
* smp_mpidr.h (which implements ALT_SMP()/ALT_UP() manually due to macro limitations) has not been modified: the affected instruction (mov) has no 16-bit encoding, so the correct instruction size is satisfied in this case.
* A "mode" parameter has been added to smp_dmb:
  smp_dmb arm   @ assumes 4-byte instructions (for ARM code, e.g. kuser)
  smp_dmb       @ uses W() to ensure 4-byte instructions for ALT_SMP()
  This avoids assembly failures due to use of W() inside smp_dmb when assembling pure-ARM code in the vectors page. There might be a better way to achieve this.
* Kconfig: make SMP_ON_UP depend on (!THUMB2_KERNEL || !BIG_ENDIAN), i.e. THUMB2_KERNEL is now supported, but only if !BIG_ENDIAN (the fixup code for Thumb-2 currently assumes little-endian order).
Tested using a single generic realview kernel on:
  ARM RealView PB-A8 (CONFIG_THUMB2_KERNEL={n,y})
  ARM RealView PBX-A9 (SMP)
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Use r0,r3-r6 rather than r0,r3,r4,r6,r7, which makes it easier to understand which registers can be modified. Also document which registers hold values which must be preserved.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 30 November 2010, 2 commits
-
Committed by Dave Martin
The 32-bit conditional branches in Thumb-2 have a shorter range (+/-512K) than their ARM counterparts (+/-32MB). The linker does not currently generate trampolines to extend the range of these Thumb-2 conditional branches, resulting in link errors when vmlinux is sufficiently large, e.g.:

  head.o:(.text+0x464): relocation truncated to fit: R_ARM_THM_JUMP19

This patch forces the longer-range, unconditional branch encoding by use of an explicit IT instruction. The resulting branches are triggered on the same conditions as before.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
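The trick works because a branch placed inside an IT block must use the unconditional (long-range) encoding while still executing conditionally. Roughly (THUMB() comes from asm/unified.h and vanishes in ARM builds; the test and target here are illustrative):

        cmp     r5, #0                         @ some earlier test
 THUMB( it      eq                     )       @ condition carried by the IT instruction...
        beq     __error_p                      @ ...so beq assembles to the far-reaching encoding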
-
Committed by Dave Martin
Directives such as .long and .word do not magically cause the assembler location counter to become aligned in gas. As a result, using these directives in code sections can result in misaligned data words when building a Thumb-2 kernel (CONFIG_THUMB2_KERNEL). This is a Bad Thing, since the ABI permits the compiler to assume that fundamental types of word size or above are word-aligned when accessing them from C. If the data is not really word-aligned, this can cause impaired performance and stray alignment faults in some circumstances. In general, the following rules should be applied when using data word declaration directives inside code sections:
  * .quad and .double: .align 3
  * .long, .word, .single, .float: .align (or .align 2)
  * .short: no explicit alignment required, since Thumb-2 instructions are always 2 or 4 bytes in size, so a .short placed immediately after an instruction is already sufficiently aligned.
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 20 October 2010, 1 commit
-
Committed by Jeremy Kerr
Since we can get both physical and virtual addresses from the addruart macro, we can use this to establish the debug mappings. In the case of CONFIG_DEBUG_ICEDCC, we don't need any mappings, but may still need to set up r7 correctly. Incorporating ASM changes from Nicolas Pitre <npitre@fluxnic.net>.
Signed-off-by: Jeremy Kerr <jeremy.kerr@canonical.com>
Tested-by: Kevin Hilman <khilman@deeprootsystems.com>
-
- 08 October 2010, 4 commits
-
Committed by Russell King
Add some additional documentation on register usage in __enable_mmu to help complete the overall picture.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Move these two functions, both of which are required for secondary CPU booting, into the cpuinit section. Ensure bad processors call __error_p for better diagnostics, rather than just __error.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
__enable_mmu is required to be executed in an identity-mapped region to ensure that variances in CPUs do not cause a crash. We currently achieve this by assuming that it will be co-located with __create_page_tables. With hotplug CPU support, this assumption becomes invalid. Implement a better solution which ensures that it will be appropriately mapped no matter where it is placed.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
This allows us to relocate __mmap_switched and associated data away from the head section.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 05 October 2010, 1 commit
-
Committed by Russell King
UP systems do not implement all the instructions that SMP systems have, so in order to boot an SMP kernel on a UP system, we need to rewrite parts of the kernel. Do this using an 'alternatives' scheme, where the kernel code and data is modified prior to initialization to replace the SMP instructions, thereby rendering the problematical code ineffectual. We use the linker to generate a list of 32-bit word locations and their replacement values, and run through these replacements when we detect a UP system.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
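In use the scheme looks like paired macros from asm/assembler.h: the SMP instruction is emitted in place, and the same-size UP replacement, together with the address to patch, goes into a table that the boot code walks on UP hardware. A representative sketch; the instruction pair here is illustrative:

        ALT_SMP(mrc     p15, 0, r0, c1, c0, 1) @ SMP: really read the auxiliary control register
        ALT_UP(mov      r0, #0)                @ UP: substitute a harmless same-size instruction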
-
- 24 November 2009, 1 commit
-
Committed by Tim Abbott
This has the consequence of changing the section name used for head code from ".text.head" to ".head.text". Since this commit changes all users in the architecture, this change should be harmless. The .text.head output section is eliminated and the head text code is included at the start of the .init output section.
Signed-off-by: Tim Abbott <tabbott@ksplice.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 24 July 2009, 1 commit
-
Committed by Catalin Marinas
This patch implements the ARM/Thumb-2 unified kernel start-up and exception handling code.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 01 September 2008, 1 commit
-
Committed by Catalin Marinas
This declaration specifies the "function" type and size for various assembly functions, mainly needed for generating the correct branch instructions in Thumb-2.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
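The annotation matters for Thumb-2 because callers only get interworking-correct branches to a symbol that is typed as a function; a hypothetical helper shows the usual pairing:

ENTRY(flush_widget_state)                      @ hypothetical assembly helper
        mov     r0, #0
        bx      lr
ENDPROC(flush_widget_state)                    @ marks the symbol as %function and records its size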
-
- 06 March 2008, 1 commit
-
Committed by Greg Ungerer
Move the definitions of ATAG_CORE and ATAG_CORE_SIZE from head.S to head-common.S. There is no use of these in head.S itself, but they are used in head-common.S. When building for the !CONFIG_MMU case these were not defined when compiling head-nommu.S (which includes head-common.S).
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 12 July 2007, 1 commit
-
Committed by Bill Gatliff
Examines the ATAGS pointer (r2) at boot, and interprets a nonzero value as a reference to an ATAGS structure. A suitable ATAGS structure replaces the kernel's command line.
Signed-off-by: Bill Gatliff <bgat@billgatliff.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
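Validating r2 boils down to an alignment check plus a look at the leading tag; a sketch close in spirit to the added routine (the real code also checks the core tag's size word):

        tst     r2, #0x3                       @ pointer must be word aligned
        bne     1f
        ldr     r5, [r2, #4]                   @ second word of the first tag: its id
        ldr     r6, =0x54410001                @ ATAG_CORE
        cmp     r5, r6                         @ does the list start with ATAG_CORE?
        bne     1f
        mov     pc, lr                         @ looks sane: keep r2
1:      mov     r2, #0                         @ otherwise ignore the pointer
        mov     pc, lr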
-
- 08 May 2007, 1 commit
-
Committed by Russell King
Commit 86c0baf1 highlighted that we may end up with the head text placed elsewhere in the kernel image. Introduce a new .text.head section to contain the initial kernel startup code, and always place this section at the beginning of the kernel image.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 04 May 2007, 1 commit
-
Committed by Nicolas Pitre
Let's surround constructs like:

        orr     r3, r3, #(KERNEL_RAM_PADDR & 0x00f00000)

between .if ... .endif, since (KERNEL_RAM_PADDR & 0x00f00000) is 0 in 99% of all cases. Also let's mask PHYS_OFFSET with 0x00f00000 instead of 0x00e00000. Section mappings are really 1MB, not 2MB, and the 2MB grouping is a higher-level issue already much better enforced with

        #if (PHYS_OFFSET & 0x001fffff)
        #error "PHYS_OFFSET must be at an even 2MiB boundary!"
        #endif

at the top of the file.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
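Put together, the guarded form emits nothing at all in the common case where the masked value is zero; roughly (KERNEL_RAM_PADDR is head.S's constant for the kernel's physical load address, and the first orr is shown only for context):

        orr     r3, r3, #(KERNEL_RAM_PADDR & 0xff000000)       @ top byte, always needed
        .if     (KERNEL_RAM_PADDR & 0x00f00000)                @ usually 0, so usually no
        orr     r3, r3, #(KERNEL_RAM_PADDR & 0x00f00000)       @ instruction is emitted here
        .endif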
-
- 26 February 2007, 1 commit
-
Committed by Nicolas Pitre
Since TEXT_OFFSET is meant to determine the RAM location for kernel use, it should affect the .data and .bss initial mapping in the XIP case. Otherwise an XIP kernel would crash if TEXT_OFFSET gets somewhat larger than 2MB. The corresponding code is also moved up a bit to be near the similar .text mapping code, making the whole a bit more straightforward to understand.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>