- 14 November 2013, 1 commit

Submitted by Victor Kamensky

Commit bc41b872 ("ARM: 7846/1: Update SMP_ON_UP code to detect A9MPCore with 1 CPU devices") added a read of the SCU configuration register to the __fixup_smp function. Such a read must be followed by a byte swap when the kernel runs in BE mode.

Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
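
A minimal C sketch of the idea (not the kernel's code, which is assembly), assuming the usual Cortex-A9 MPCore layout with the SCU configuration register at offset 0x04: under BE8, a data load from the little-endian SCU comes back byte-swapped and must be swapped again.

    #include <stdint.h>

    /* Minimal sketch: the SCU is little-endian, so a BE8 (big-endian
     * data) kernel sees loaded words byte-reversed. */
    static inline uint32_t scu_read_config(volatile const uint32_t *scu_base)
    {
        uint32_t val = scu_base[1];      /* SCU configuration at base + 0x04 */
    #ifdef __ARMEB__
        val = __builtin_bswap32(val);    /* undo the swap done by the load */
    #endif
        return val;
    }

    /* CFG[1:0] holds the number of CPUs minus one. */
    static inline unsigned int scu_core_count(volatile const uint32_t *scu_base)
    {
        return (scu_read_config(scu_base) & 3) + 1;
    }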

- 29 October 2013, 1 commit

Submitted by Sricharan R

Commit f52bb722 ("ARM: mm: Correct virt_to_phys patching for 64 bit physical addresses") introduced a __ARMEB__ macro usage in a new place, but missed the second underscore; correct it here. Also, an explicit .align directive is needed for the label with the .long data type to be aligned on a 4-byte boundary; otherwise this can cause problems for Thumb-2 builds, so add it here.

Signed-off-by: Sricharan R <r.sricharan@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 20 October 2013, 2 commits

Submitted by Ben Dooks

If we boot in LE on a kernel compiled for BE8, add code to switch the CPU state to BE8. Since the instruction stream is always little-endian, we do not need to do anything special to the instructions. Also ensure that the secondary processors are started in the same mode. Note that this adds about 20 bytes to the kernel image, but it seems easier to do this than to add another configuration to change.

Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

Submitted by Ben Dooks

The fixup_pv_table code assumes that the instructions are in the same endian configuration as the data, but when the CPU is running in BE8 the instructions stay in little-endian format. Make sure that if CONFIG_CPU_ENDIAN_BE8 is set, all alterations to the instructions take into account that LDR/STR will swap the data endianness. Since the code only modifies a byte, we avoid double-swapping the data and just change the bits we clear and ORR in (in the non-Thumb-2 case). For Thumb-2, we add the necessary rev16 instructions to ensure that the instructions are processed in the correct format, as that was easier than rewriting the code to contain a mask and shift.

Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
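
A hedged C sketch of the masking trick (hypothetical helper; the kernel does this in assembly): a big-endian LDR of a little-endian ARM opcode returns its bytes reversed, so rather than swap-modify-swap, the reversed image is patched directly.

    #include <stdint.h>

    /* Sketch: the opcode's low byte (its 8-bit immediate field) sits
     * in bits [31:24] of the word a big-endian load produced. */
    static void patch_imm8(uint32_t *insn, uint32_t imm8)
    {
    #ifdef CONFIG_CPU_ENDIAN_BE8
        *insn = (*insn & ~0xff000000u) | (imm8 << 24);
    #else
        *insn = (*insn & ~0x000000ffu) | imm8;
    #endif
    }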

- 11 October 2013, 1 commit

Submitted by Sricharan R

The current phys-to-virt patching mechanism works only for 32-bit physical addresses, and this patch extends the idea to 64-bit physical addresses. The 64-bit v2p patching mechanism patches the upper 8 bits of the physical address with a constant using a 'mov' instruction, and the lower 32 bits are patched using 'add'. While this is correct, on platforms where the lowmem addressable physical memory spans a 4GB boundary, the addition of the lower 32 bits can produce a carry, which has to be taken into account and added into the upper bits.

The patched __pv_offset and the VA are added in the lower 32 bits, where __pv_offset can be in two's-complement form when PA_START < VA_START, and that can result in a false carry. For example:

1) PA = 0x80000000, VA = 0xC0000000: __pv_offset = PA - VA = 0xC0000000 (two's complement)
2) PA = 0x2_80000000, VA = 0xC0000000: __pv_offset = PA - VA = 0x1_C0000000

So adding __pv_offset + VA should never result in a true overflow for (1). In order to distinguish a true carry, __pv_offset is extended to 64 bits, and its upper 32 bits hold 0xffffffff when __pv_offset is in two's-complement form; 'mvn #0' is inserted instead of 'mov' while patching for the same reason. Since the mov, add, and sub instructions are patched with different constants inside the same stub, the rotation field of the opcode is used to differentiate between them. With VA = 0xC0000000, the above examples of the v2p translation become:

1) PA[63:32] = 0xffffffff, PA[31:0] = VA + 0xC0000000 --> results in a carry; PA[63:32] = PA[63:32] + carry; PA[63:0] = 0x0_80000000
2) PA[63:32] = 0x1, PA[31:0] = VA + 0xC0000000 --> results in a carry; PA[63:32] = PA[63:32] + carry; PA[63:0] = 0x2_80000000

The above ideas were suggested by Nicolas Pitre <nico@linaro.org> as part of the review of the first and second versions of this patch. There is no corresponding change on the phys_to_virt() side, because computations on the upper 32 bits would be discarded anyway.

Cc: Russell King <linux@arm.linux.org.uk>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Sricharan R <r.sricharan@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
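
A runnable C model of the patched arithmetic (function name is hypothetical; the real code is patched ARM assembly): the carry out of the low-word addition propagates into the upper word, reproducing both examples above.

    #include <stdint.h>
    #include <stdio.h>

    /* __pv_offset widened to 64 bits; its upper word is 0xffffffff
     * (the patched 'mvn #0') when the offset is a two's complement,
     * so a true carry wraps the upper word back to zero. */
    static uint64_t virt_to_phys64(uint32_t va, uint64_t pv_offset)
    {
        uint32_t lo = va + (uint32_t)pv_offset;            /* the patched 'add' */
        uint32_t carry = lo < va;                          /* carry out of bit 31 */
        uint32_t hi = (uint32_t)(pv_offset >> 32) + carry; /* adds/adc pairing */
        return ((uint64_t)hi << 32) | lo;
    }

    int main(void)
    {
        /* 1) PA = 0x80000000, VA = 0xC0000000: prints 0x80000000 */
        printf("%#llx\n", (unsigned long long)
               virt_to_phys64(0xC0000000u, 0xffffffffC0000000ull));
        /* 2) PA = 0x2_80000000, VA = 0xC0000000: prints 0x280000000 */
        printf("%#llx\n", (unsigned long long)
               virt_to_phys64(0xC0000000u, 0x1C0000000ull));
        return 0;
    }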

- 03 October 2013, 1 commit

Submitted by Santosh Shilimkar

The generic code is well equipped to differentiate between SMP and UP configurations. However, there are some devices that use the Cortex-A9 MPCore IP in a single-CPU configuration. To let these SoCs co-exist in a CONFIG_SMP=y build by leveraging the SMP_ON_UP support, we need to additionally check the number of cores in the Cortex-A9 MPCore configuration. Without such a check in place, the startup code tries to execute the ALT_SMP() set of instructions, which leads to CPU faults. The issue was spotted on TI's Aegis device, and this patch makes the device work with omap2plus_defconfig, which enables SMP by default. The change is kept limited to the Cortex-A9 MPCore detection code. Note that if any future SoC *does* use 0x0 as the PERIPH_BASE, then the SCU address check code will need to be #ifdef'd for the Aegis platform.

Acked-by: Sricharan R <r.sricharan@ti.com>
Signed-off-by: Vaibhav Bedia <vaibhav.bedia@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
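
A rough C sketch of the added check (helper name and MIDR masking are illustrative assumptions, loosely following __fixup_smp): an A9 MPCore configured with one CPU must still take the UP path, so the SCU core count is consulted too.

    #include <stdint.h>

    static int a9mpcore_is_smp(uint32_t midr, volatile const uint32_t *scu_base)
    {
        if ((midr & 0x0000fff0) != 0x0000c090)  /* not a Cortex-A9 */
            return 1;                           /* decided by other checks */
        if (!scu_base)                          /* PERIPH_BASE of 0x0 */
            return 0;                           /* treat as UP */
        return ((scu_base[1] & 3) + 1) > 1;     /* SCU CFG[1:0] = nr CPUs - 1 */
    }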

- 01 August 2013, 1 commit

Submitted by Russell King

Commit 8bd26e3a ("arm: delete __cpuinit/__CPUINIT usage from all ARM users") caused some code to end up in sections that are discarded, through the removal of the __CPUINIT annotations. Add appropriate .text annotations to bring this code back into the kernel text.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 15 July 2013, 1 commit

Submitted by Paul Gortmaker

The __cpuinit type of throwaway sections might have made sense some time ago when RAM was more constrained, but now the savings do not offset the cost and complications. For example, the fix in commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time") is a good example of the nasty type of bugs that can be created with improper use of the various __init prefixes.

After a discussion on LKML [1] it was decided that cpuinit should go the way of devinit and be phased out. Once all the users are gone, we can then finally remove the macros themselves from linux/init.h.

Note that some harmless section mismatch warnings may result, since notify_cpu_starting() and cpu_up() are arch-independent (kernel/cpu.c) and are flagged as __cpuinit -- so if we remove the __cpuinit from the arch-specific callers, we will also get section mismatch warnings. As an intermediate step, we intend to turn the linux/init.h cpuinit-related content into no-ops as early as possible, since that will get rid of these warnings. In any case, they are temporary and harmless.

This removes all the ARM uses of the __cpuinit macros from C code, and all __CPUINIT from assembly code. It also removes two ".previous" section statements that were paired off against __CPUINIT (aka .section ".cpuinit.text").

[1] https://lkml.org/lkml/2013/5/20/589

Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>

- 30 May 2013, 1 commit

Submitted by Cyril Chemparathy

This patch redefines the early boot-time use of the R4 register to steal a few low-order bits (ARCH_PGD_SHIFT bits) on LPAE systems. This allows for up to 38-bit physical addresses.

Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
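
A rough C sketch of the packing scheme (the ARCH_PGD_SHIFT value of 6 is an assumption for illustration; the real code manipulates R4 in assembly): the page directory is at least 2^6-byte aligned, so its low bits are zero and a 32-bit register can carry physical address bits [37:6].

    #include <stdint.h>

    #define ARCH_PGD_SHIFT 6   /* assumed alignment of the page directory */

    static uint32_t pack_pgd(uint64_t pgd_phys)   /* what boot code puts in R4 */
    {
        return (uint32_t)(pgd_phys >> ARCH_PGD_SHIFT);
    }

    static uint64_t unpack_pgd(uint32_t r4)       /* what MMU setup recovers */
    {
        return (uint64_t)r4 << ARCH_PGD_SHIFT;
    }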

- 03 April 2013, 1 commit

Submitted by Paul Bolle

CONFIG_LPAE doesn't exist: the correct option is CONFIG_ARM_LPAE, so fix up the two typos under arch/arm/. The fix to head.S is slightly scary, but it is just for setting up an early io-mapping for the serial port when running on a big-endian, LPAE system. Since these systems don't exist in the wild (at least, I have no access to one outside of kvmtool, which doesn't provide a serial port suitable for earlyprintk), we can revisit the code later if it causes any problems.

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 04 March 2013, 1 commit

Submitted by Will Deacon

The LPAE page table format uses 64-bit descriptors, so we need to take endianness into account when populating the swapper and idmap tables during early initialisation. This patch ensures that we store the two words making up each page table entry in the correct order when running big-endian.

Cc: <stable@vger.kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
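
A hedged C sketch of the word-ordering issue (hypothetical helper; head.S does this with two str instructions): when a 64-bit descriptor is written as two 32-bit words, the word order must follow the CPU's data endianness, or the MMU reads a scrambled descriptor.

    #include <stdint.h>

    static void write_pte_words(uint32_t *slot, uint64_t desc)
    {
    #ifdef __ARMEB__
        slot[0] = (uint32_t)(desc >> 32);   /* big-endian: high word first */
        slot[1] = (uint32_t)desc;
    #else
        slot[0] = (uint32_t)desc;           /* little-endian: low word first */
        slot[1] = (uint32_t)(desc >> 32);
    #endif
    }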

- 17 January 2013, 1 commit

Submitted by Nicolas Pitre

We currently use a temporary 1MB section aligned to a 1MB boundary for mapping the provided device tree until the final page table is created. However, if the device tree happens to cross that 1MB boundary, the end of it remains unmapped and the kernel crashes when it attempts to access it. Given no restriction on the location of that DTB, it could end up with only a few bytes mapped at the end of a section. Solve this issue by mapping two consecutive sections.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Sascha Hauer <s.hauer@pengutronix.de>
Tested-by: Tomasz Figa <t.figa@samsung.com>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
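
A small C sketch of the resulting mapping logic (map_section is a hypothetical stand-in for the real section mapper): a DTB smaller than 1MB can cross at most one section boundary, so two consecutive sections always cover it, wherever the bootloader placed it.

    #include <stdint.h>
    #include <stdio.h>

    #define SECTION_SIZE 0x100000u          /* one 1MB first-level section */

    static void map_section(uint32_t phys)  /* stand-in for the real mapper */
    {
        printf("map 1MB section at %#x\n", phys);
    }

    static void map_dtb(uint32_t dtb_phys)
    {
        uint32_t base = dtb_phys & ~(SECTION_SIZE - 1);
        map_section(base);
        map_section(base + SECTION_SIZE);   /* covers a straddling DTB */
    }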

- 11 January 2013, 1 commit

Submitted by Marc Zyngier

Secondary CPUs should use the __hyp_stub_install_secondary entry point, so that boot mode inconsistencies can be detected.

Cc: <stable@vger.kernel.org>
Acked-by: Dave Martin <dave.martin@linaro.org>
Reported-by: Ian Molton <ian.molton@collabora.co.uk>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

- 19 September 2012, 1 commit

Submitted by Dave Martin

This patch does two things:

* Ensure that asynchronous aborts are masked at kernel entry. The bootloader should be masking these anyway, but this reduces the damage window just in case it doesn't.

* Enter svc mode via exception return to ensure that the CPU state is properly serialised. This does not matter when switching from an ordinary privileged mode ("PL1" modes in ARMv7-AR rev C parlance), but it potentially does matter when switching from another privileged mode such as hyp mode.

This should allow the kernel to boot safely either from svc mode or hyp mode, even if no support for use of the ARM Virtualization Extensions is built into the kernel.

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

- 14 September 2012, 1 commit

Submitted by Rob Herring

Based on a suggestion by Russell King, create a common location for the debug macros and select the included debug macro file using a config option.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>

- 10 July 2012, 1 commit

Submitted by Nicolas Pitre

Let's map the initial RAM up to the end of the kernel .bss instead of the strict kernel image area. This simplifies the code, as the kernel image only needs to be handled specially in the XIP case. It also covers the legacy ATAG location.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 04 May 2012, 1 commit

Submitted by Nicolas Pitre

There is just no point in mapping up to 512MB for a serial port. A single 1MB entry is more than sufficient for all users. This will create less interference for the following debugging patch.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 29 March 2012, 1 commit

Submitted by Russell King

Avoid namespace conflicts with drivers over the CP15 definitions by moving CP15-related prototypes and definitions to a private header file.

Acked-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Stephen Warren <swarren@nvidia.com> [Tegra]
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com> [EP93xx]
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: David Howells <dhowells@redhat.com>

- 24 March 2012, 2 commits

Submitted by Nicolas Pitre

This is a very simple method for code running in an emulator, or under the supervision of a debugger, to use I/O facilities on the controlling host. Tested with OpenOCD and ARM's Fast Models. Details on semihosting can be found in chapter 8 of DUI0203I_rvct_developer_guide.pdf from ARM Ltd.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
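
A sketch of what such a semihosting call looks like from C (ARM state; SYS_WRITE0 is operation 0x04 per the semihosting convention the guide describes; this traps to the host only under a debugger or emulator):

    /* Operation number in r0, parameter in r1, then SVC 0x123456,
     * which the host debugger/emulator intercepts; result in r0. */
    static int semihost_call(int op, void *param)
    {
        register int r0 asm("r0") = op;
        register void *r1 asm("r1") = param;
        asm volatile("svc 0x123456" : "+r"(r0) : "r"(r1) : "memory");
        return r0;
    }

    /* SYS_WRITE0 (0x04): write a NUL-terminated string to the host. */
    static void semihost_write0(const char *s)
    {
        semihost_call(0x04, (void *)s);
    }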

Submitted by Russell King

Avoid namespace conflicts with drivers over the CP15 definitions by moving CP15-related prototypes and definitions to a private header file.

Acked-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Stephen Warren <swarren@nvidia.com> [Tegra]
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com> [EP93xx]
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 13 January 2012, 1 commit

Submitted by Catalin Marinas

This patch adds a check for the presence of the LPAE feature during CPU initialisation. If it is not present, an error is reported when CONFIG_DEBUG_LL is enabled.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 08 December 2011, 2 commits

Submitted by Catalin Marinas

This patch adds the MMU initialisation for the LPAE page table format. The swapper_pg_dir size with LPAE is 5 pages rather than 4. A new proc-v7-3level.S file contains the TTB initialisation, context switch, and PTE setting code for LPAE. The TTBRx split is based on PAGE_OFFSET, with TTBR1 used for the kernel mappings. The 36-bit mappings (supersections) and a few other memory types in mmu.c are conditionally compiled.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Submitted by Will Deacon

Before we enable the MMU, we must ensure that the TTBR registers contain sane values. After the MMU has been enabled, we jump to the *virtual* address of the following function, so we also need to ensure that the SCTLR write has taken effect. This patch adds ISB instructions around the SCTLR write to ensure the visibility of the above.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
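
A loose C rendering of the described barrier placement (the actual change is assembly in head.S; inline asm, ARMv7 target assumed): an ISB before the SCTLR write so earlier CP15 writes such as the TTBR setup are complete, and one after so the MMU is definitely on before the branch to the virtual entry point.

    #include <stdint.h>

    static inline void write_sctlr_synchronised(uint32_t sctlr)
    {
        asm volatile("isb");                           /* prior CP15 writes done */
        asm volatile("mcr p15, 0, %0, c1, c0, 0"       /* write SCTLR */
                     :: "r"(sctlr) : "memory");
        asm volatile("isb");                           /* SCTLR write visible */
    }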

- 06 December 2011, 2 commits

Submitted by Will Deacon

The ARM SMP booting code allocates a temporary set of page tables containing an identity mapping of the kernel image and provides this to secondary CPUs for initial booting. In reality, we only need to include the __turn_mmu_on function in the identity mapping, since the rest of the kernel executes from virtual addresses after that point. This patch adds __turn_mmu_on to the .idmap.text section, allowing the SMP booting code to use the idmap_pgd directly and not have to populate its own set of page tables. As a result of this patch, we can make the identity_mapping_add function static (since it is only used within mm/idmap.c) and also remove the identity_mapping_del function. The identity map population is moved to an early initcall so that it is set up in time for secondary CPU bringup.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

Submitted by Will Deacon

__create_page_tables identity maps the region of memory from __enable_mmu to the end of __turn_mmu_on. In preparation for including __turn_mmu_on in the .idmap.text section, this patch modifies the identity mapping so that it only includes the __turn_mmu_on code.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

- 09 November 2011, 1 commit

Submitted by Catalin Marinas

Recent GCC versions generate unaligned accesses by default on ARMv6 and later processors. This patch ensures that the SCTLR.A bit is always cleared on such processors to avoid the kernel trapping before alignment_init() is called.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: John Linn <John.Linn@xilinx.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
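
A sketch of the fix in C-flavoured kernel code (inline asm, ARMv6+ target assumed; SCTLR.A is bit 1):

    #include <stdint.h>

    /* Clear SCTLR.A so compiler-generated unaligned accesses don't
     * trap before alignment handling has been initialised. */
    static void clear_sctlr_a_bit(void)
    {
        uint32_t sctlr;
        asm volatile("mrc p15, 0, %0, c1, c0, 0" : "=r"(sctlr));
        sctlr &= ~(1u << 1);                            /* SCTLR.A = 0 */
        asm volatile("mcr p15, 0, %0, c1, c0, 0" :: "r"(sctlr) : "memory");
    }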

- 26 September 2011, 2 commits

Submitted by Nicolas Pitre

When the CONFIG_NO_MACH_MEMORY_H symbol is selected by a particular machine class, the machine-specific memory.h include file is no longer used and can be removed. In that case the equivalent information can be obtained dynamically at runtime by enabling CONFIG_ARM_PATCH_PHYS_VIRT, or by specifying the physical memory address at kernel configuration time. If/when all instances of mach/memory.h are removed, this symbol can be removed too.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>

Submitted by Nicolas Pitre

Some platforms (like OMAP, not to name it) do rather complicated hacks just to determine the base UART address to use. Let's give their addruart macro some slack by providing an extra work register, which will allow for much-needed cleanups. This is basically a no-op, as this commit only adds the extra argument to the macro; no one is using it yet.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Reviewed-by: Kevin Hilman <khilman@ti.com>

- 23 August 2011, 1 commit

Submitted by Catalin Marinas

PGDIR_SHIFT and PMD_SHIFT for the classic 2-level page table format have the same value (21). This patch converts the PGDIR_* uses in the kernel to the PMD_* equivalents so that LPAE builds can reuse the same code.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 13 August 2011, 1 commit

Submitted by Nicolas Pitre

This code can be removed now that MSM targets no longer need the 16-bit offsets for P2V.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 19 July 2011, 1 commit

Submitted by Dave Martin

Currently, the documented kernel entry requirements are not explicit about whether the kernel should be entered in ARM or Thumb state, leading to an ambiguity about how to enter Thumb-2 kernels. As a result, the kernel relies on the zImage decompressor to enter the kernel proper in the correct instruction set state. This patch changes the boot entry protocol for head.S and Image to be the same as for zImage: in all cases, the kernel is now entered in ARM state. Documentation/arm/Booting is updated to reflect this new policy. A different rule will be needed for Cortex-M class CPUs as and when support for those lands in mainline, since these CPUs don't support the ARM instruction set at all: a note is added to the effect that the kernel must be entered in Thumb state on such systems.

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 26 May 2011, 1 commit

Submitted by Catalin Marinas

This patch makes TTBR1 point to swapper_pg_dir so that global kernel mappings can be used exclusively on v6 and v7 cores, where they are needed.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 11 May 2011, 1 commit

Submitted by Grant Likely

The dtb is passed to the kernel via register r2, which is the same method used to pass an atags pointer. This patch modifies __vet_atags to not clear r2 when it encounters a dtb image.

v2: fixed bugs pointed out by Nicolas Pitre

Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
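
A C sketch of the check (helper name hypothetical; constants from the DTB and ATAG formats): keep the pointer when it references either boot-parameter format, otherwise discard it.

    #include <stdint.h>

    #define OF_DT_MAGIC 0xd00dfeedu   /* DTB magic, stored big-endian */
    #define ATAG_CORE   0x54410001u   /* first tag of a valid ATAG list */

    static int boot_params_look_valid(const uint32_t *r2)
    {
        if (!r2)
            return 0;
        if (__builtin_bswap32(r2[0]) == OF_DT_MAGIC)  /* DTB? (LE CPU view) */
            return 1;
        if (r2[1] == ATAG_CORE)                       /* ATAG header: size, tag */
            return 1;
        return 0;
    }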

- 11 March 2011, 1 commit

Submitted by Nicolas Pitre

This adds Thumb-2 support to the runtime patching of the virt_to_phys and phys_to_virt opcodes. Both the 8-bit and the 16-bit fixups were tested, using different placements in memory to exercise all code paths.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Reviewed-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 18 February 2011, 4 commits

Submitted by Rob Herring

If the ATAGs or DTB pointer is not within the first 1MB of RAM, the boot params will not be mapped early enough, so map the 1MB region that r2 points to. Only map the first 1MB when r2 is 0. Some assembly improvements are from Nicolas Pitre.

Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Submitted by Russell King

MSM's memory is aligned to 2MB, which is more than we can handle with the existing method, as we're limited to the upper 8 bits. Extend this to 16 bits by using two instructions, automatically selected when MSM is enabled.

Acked-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Submitted by Russell King

This idea came from Nicolas; Eric Miao produced an initial version, which was then rewritten into this.

Patch the physical-to-virtual translations at runtime. As we modify the code, this makes it incompatible with XIP kernels, but allows us to achieve this with minimal loss of performance. As many translations are of the form:

    physical = virtual + (PHYS_OFFSET - PAGE_OFFSET)
    virtual  = physical - (PHYS_OFFSET - PAGE_OFFSET)

we generate an 'add' instruction for __virt_to_phys() and a 'sub' instruction for __phys_to_virt(). We calculate (PHYS_OFFSET - PAGE_OFFSET) at run time by comparing the address prior to MMU initialization with where it should be once the MMU has been initialized, and place this constant into the above add/sub instructions.

Once we have (PHYS_OFFSET - PAGE_OFFSET), we can calculate the real PHYS_OFFSET, as PAGE_OFFSET is a build-time constant, and save this for the C-mode PHYS_OFFSET variable definition to use.

At present, we are unable to support Realview with Sparsemem enabled, as this uses a complex mapping function, or MSM, as this requires a constant which will not fit in our math instruction. A module version magic string is added for this feature to prevent incompatible modules from being loaded.

Tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
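
A C model of the patched translation (the variable is illustrative; the kernel instead pokes the delta into the add/sub immediates so no memory load is needed on the hot path):

    #include <stdint.h>

    static uint32_t pv_offset;  /* PHYS_OFFSET - PAGE_OFFSET, found at boot */

    static inline uint32_t virt_to_phys32(uint32_t va)
    {
        return va + pv_offset;  /* becomes a patched 'add' instruction */
    }

    static inline uint32_t phys_to_virt32(uint32_t pa)
    {
        return pa - pv_offset;  /* becomes a patched 'sub' instruction */
    }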

Submitted by Russell King

head.S makes use of PHYS_OFFSET. When it becomes a variable, the assembler won't understand this. Compute PHYS_OFFSET by the following method. This code is linked at its virtual address, but runs before the MMU is enabled, so at its physical address:

    1:  .long   .
        .long   PAGE_OFFSET

        adr     r0, 1b          @ r0 = physical '.'
        ldmia   r0, {r1, r2}    @ r1 = virtual '.', r2 = PAGE_OFFSET
        sub     r1, r0, r1      @ r1 = physical-virtual
        add     r2, r2, r1      @ r2 = PAGE_OFFSET + physical-virtual
                                @    := PHYS_OFFSET

Switch XIP users of PHYS_OFFSET to use PLAT_PHYS_OFFSET - we can't use this method for XIP kernels, as the code doesn't execute in RAM.

Tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 16 February 2011, 1 commit

Submitted by Russell King

Since the debug macros no longer depend on the machine type information, the machine type lookup can be deferred to setup_arch() in setup.c, which simplifies the code somewhat. We also move the __error_a functionality into setup.c, for displaying a message when a bad machine ID is passed to the kernel via the LL debug code. We also log this in the kernel ring buffer, which makes it possible to retrieve the message via a debugger. Original idea from Grant Likely.

Acked-by: Grant Likely <grant.likely@secretlab.ca>
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 10 February 2011, 1 commit

Submitted by Russell King

With certain configurations, we inline the unlock functions in modules, which results in SMP alternatives being created in modules. We need to fix those up when loading a module to prevent undefined instruction faults.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>