- 21 March 2016, 1 commit
-
-
Committed by Mark Rutland

Commit f80fb3a3 ("arm64: add support for kernel ASLR") missed a DSB necessary to complete I-cache maintenance in the primary boot path, and hence stale instructions may still be present in the I-cache and may be executed until the I-cache maintenance naturally completes.

Since commit 8ec41987 ("arm64: mm: ensure patched kernel text is fetched from PoU"), all CPUs invalidate their I-caches after their MMU is enabled. Prior to a CPU's MMU having been enabled, arbitrary lines may have been fetched from the PoC into I-caches. We never patch text expected to be executed with the MMU off. Thus, it is unnecessary to perform broadcast I-cache maintenance in the primary boot path.

This patch reduces the scope of the I-cache maintenance to the local CPU, and adds the missing DSB with similar scope, matching prior maintenance in the primary boot path.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 25 February 2016, 1 commit
-
-
Committed by Suzuki K Poulose

A secondary CPU could fail to come online due to insufficient capabilities and could simply die or loop in the kernel, e.g., a CPU with no support for the selected kernel PAGE_SIZE loops in the kernel with the MMU turned off, or a hotplugged CPU which doesn't have one of the advertised system capabilities will die during the activation. There is no way to synchronise the status of the failing CPU back to the master. This patch solves the issue by adding a field to the secondary_data which can be updated by the failing CPU. If the secondary CPU fails even before turning the MMU on, it updates the status in a special variable reserved in the head.text section, to make sure that the update can be cache invalidated safely without possible sharing of the cache write back granule.

Here are the possible states:

 -1. CPU_MMU_OFF - Initial value set by the master CPU; this value indicates that the CPU could not turn the MMU on, hence the status could not be reliably updated in the secondary_data. Instead, the CPU has updated the status @ __early_cpu_boot_status.
  0. CPU_BOOT_SUCCESS - CPU has booted successfully.
  1. CPU_KILL_ME - CPU has invoked cpu_ops->die, indicating the master CPU to synchronise by issuing a cpu_ops->cpu_kill.
  2. CPU_STUCK_IN_KERNEL - CPU couldn't invoke die(), instead is looping in the kernel. This information could be used by, say, kexec to check if it is really safe to do a kexec reboot.
  3. CPU_PANIC_KERNEL - CPU detected some serious issues which require the kernel to crash immediately. The secondary CPU cannot call panic() until it has initialised the GIC. This flag can be used to instruct the master to do so.

Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
[catalin.marinas@arm.com: conflict resolution]
[catalin.marinas@arm.com: converted "status" from int to long]
[catalin.marinas@arm.com: updated update_early_cpu_boot_status to use str_l]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
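For illustration, here is a minimal user-space C sketch of these status values and of how the master side might interpret whatever a failing secondary managed to report. The names, the volatile variable and the printed messages are hypothetical stand-ins, not the kernel implementation:

    /* Illustrative sketch only: status values and a master-side check
     * modelled on the states listed above. */
    #include <stdio.h>

    #define CPU_MMU_OFF          (-1L)
    #define CPU_BOOT_SUCCESS     0L
    #define CPU_KILL_ME          1L
    #define CPU_STUCK_IN_KERNEL  2L
    #define CPU_PANIC_KERNEL     3L

    /* In the real code this information lives in secondary_data /
     * __early_cpu_boot_status and is written by the failing secondary. */
    static volatile long early_boot_status = CPU_MMU_OFF;

    /* Master CPU: act on whatever the secondary managed to report. */
    static void report_secondary(long status)
    {
        switch (status) {
        case CPU_BOOT_SUCCESS:
            printf("secondary CPU booted\n");
            break;
        case CPU_KILL_ME:
            printf("secondary asks to be killed (cpu_ops->cpu_kill)\n");
            break;
        case CPU_STUCK_IN_KERNEL:
            printf("secondary is stuck looping in the kernel\n");
            break;
        case CPU_PANIC_KERNEL:
            printf("secondary hit a fatal error: panic on its behalf\n");
            break;
        default: /* CPU_MMU_OFF: failed before the MMU was enabled */
            printf("secondary failed before enabling the MMU\n");
        }
    }

    int main(void)
    {
        report_secondary(early_boot_status);
        return 0;
    }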
-
- 24 February 2016, 4 commits
-
-
Committed by Ard Biesheuvel

This adds support for KASLR, based on entropy provided by the bootloader in the /chosen/kaslr-seed DT property. Depending on the size of the address space (VA_BITS) and the page size, the entropy in the virtual displacement is up to 13 bits (16k/2 levels) and up to 25 bits (all 4 levels), with the sidenote that displacements that result in the kernel image straddling a 1GB/32MB/512MB alignment boundary (for 4KB/16KB/64KB granule kernels, respectively) are not allowed, and will be rounded up to an acceptable value.

If CONFIG_RANDOMIZE_MODULE_REGION_FULL is enabled, the module region is randomized independently from the core kernel. This makes it less likely that the location of core kernel data structures can be determined by an adversary, but causes all function calls from modules into the core kernel to be resolved via entries in the module PLTs.

If CONFIG_RANDOMIZE_MODULE_REGION_FULL is not enabled, the module region is randomized by choosing a page aligned 128 MB region inside the interval [_etext - 128 MB, _stext + 128 MB). This gives between 10 and 14 bits of entropy (depending on page size), independently of the kernel randomization, but still guarantees that modules are within the range of relative branch and jump instructions (with the caveat that, since the module region is shared with other uses of the vmalloc area, modules may need to be loaded further away if the module region is exhausted).

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
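As a rough illustration of the placement logic only (not the actual arm64 implementation), a seed can be masked to a 2 MB aligned displacement and nudged off a 1 GB straddle. The entropy width, assumed image footprint and rounding policy below are assumptions for the 4 KB granule case:

    /* Rough illustration: derive a 2 MB aligned displacement from a seed
     * and avoid straddling a 1 GB boundary. Constants are assumptions. */
    #include <stdint.h>
    #include <stdio.h>

    #define SZ_2M   (2ULL << 20)
    #define SZ_1G   (1ULL << 30)

    static uint64_t kaslr_offset(uint64_t seed, unsigned int entropy_bits)
    {
        uint64_t mask = ((1ULL << entropy_bits) - 1) & ~(SZ_2M - 1);
        uint64_t offset = seed & mask;
        uint64_t image_size = 32ULL << 20;  /* assumed Image footprint */

        /* If the image would straddle a 1 GB boundary, round up to it. */
        if ((offset % SZ_1G) + image_size > SZ_1G)
            offset = (offset + SZ_1G) & ~(SZ_1G - 1);

        return offset;
    }

    int main(void)
    {
        printf("displacement = 0x%llx\n",
               (unsigned long long)kaslr_offset(0x123456789abcdef0ULL, 30));
        return 0;
    }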
-
Committed by Ard Biesheuvel

This implements CONFIG_RELOCATABLE, which links the final vmlinux image with a dynamic relocation section, allowing the early boot code to perform a relocation to a different virtual address at runtime. This is a prerequisite for KASLR (CONFIG_RANDOMIZE_BASE).

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel

Before implementing KASLR for arm64 by building a self-relocating PIE executable, we have to ensure that values we use before the relocation routine is executed are not subject to dynamic relocation themselves. This applies not only to virtual addresses, but also to values that are supplied by the linker at build time and relocated using R_AARCH64_ABS64 relocations.

So instead, use assemble-time constants, or force the use of static relocations by folding the constants into the instructions.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel

Unfortunately, the current way of using the linker to emit build time constants into the Image header will no longer work once we switch to the use of PIE executables. The reason is that such constants are emitted into the binary using R_AARCH64_ABS64 relocations, which are resolved at runtime, not at build time, and the places targeted by those relocations will contain zeroes before that.

So refactor the endian swapping linker script constant generation code so that it emits the upper and lower 32-bit words separately.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
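The idea can be sketched in C as splitting a 64-bit build-time value into two independently byte-swapped 32-bit words, so no 64-bit absolute relocation is required. The swap shown models the big-endian case, and the macro names only loosely mirror the real image.h:

    /* Sketch: emit a 64-bit build-time value as two 32-bit halves. */
    #include <stdint.h>
    #include <stdio.h>

    #define SWAP_LE32(x)                        \
        ((((x) & 0x000000ffU) << 24) |          \
         (((x) & 0x0000ff00U) <<  8) |          \
         (((x) & 0x00ff0000U) >>  8) |          \
         (((x) & 0xff000000U) >> 24))

    #define IMAGE_LE64_LO(data)  SWAP_LE32((uint32_t)((data) & 0xffffffffU))
    #define IMAGE_LE64_HI(data)  SWAP_LE32((uint32_t)((data) >> 32))

    int main(void)
    {
        uint64_t image_size = 0x0000000001234000ULL;  /* assumed example */

        printf("lo32 = 0x%08x, hi32 = 0x%08x\n",
               IMAGE_LE64_LO(image_size), IMAGE_LE64_HI(image_size));
        return 0;
    }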
-
- 19 February 2016, 2 commits
-
-
Committed by Ard Biesheuvel

This relaxes the kernel Image placement requirements, so that it may be placed at any 2 MB aligned offset in physical memory. This is accomplished by ignoring PHYS_OFFSET when installing memblocks, and accounting for the apparent virtual offset of the kernel Image. As a result, virtual address references below PAGE_OFFSET are correctly mapped onto physical references into the kernel Image regardless of where it sits in memory.

Special care needs to be taken for dealing with memory limits passed via mem=, since the generic implementation clips memory top down, which may clip the kernel image itself if it is loaded high up in memory. To deal with this case, we simply add back the memory covering the kernel image, which may result in more memory being retained than was passed as a mem= parameter. Since mem= should not be considered a production feature, a panic notifier handler is installed that dumps the memory limit at panic time if one was set.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
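A toy user-space model of that mem= handling, under assumed addresses (this is region bookkeeping for illustration only, not the memblock API):

    /* Toy model: clip RAM top-down to the limit, then notice that the
     * kernel image lies above the clipped top and must be added back. */
    #include <stdint.h>
    #include <stdio.h>

    struct region { uint64_t base, size; };

    int main(void)
    {
        struct region ram    = { 0x80000000ULL, 4ULL << 30 };   /* [2G, 6G) */
        struct region kernel = { 0x140080000ULL, 32ULL << 20 }; /* loaded high */
        uint64_t limit = 1ULL << 30;                            /* mem=1G */

        if (ram.size > limit)
            ram.size = limit;  /* generic clipping keeps the bottom */

        uint64_t ram_top = ram.base + ram.size;
        uint64_t kernel_end = kernel.base + kernel.size;

        if (kernel_end > ram_top)  /* image clipped away: add it back */
            printf("re-adding [0x%llx, 0x%llx) covering the kernel image\n",
                   (unsigned long long)kernel.base,
                   (unsigned long long)kernel_end);
        return 0;
    }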
-
Committed by Ard Biesheuvel

This introduces the preprocessor symbol KIMAGE_VADDR which will serve as the symbolic virtual base of the kernel region, i.e., the kernel's virtual offset will be KIMAGE_VADDR + TEXT_OFFSET. For now, we define it as being equal to PAGE_OFFSET, but in the future, it will be moved below it once we move the kernel virtual mapping out of the linear mapping.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
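A sketch of the symbols involved; the VA_BITS and TEXT_OFFSET values are illustrative, and KERNEL_VIRT_BASE is a hypothetical name used only to show the sum:

    /* Illustrative values; the real definitions live in arch headers. */
    #define VA_BITS          48
    #define PAGE_OFFSET      (0xffffffffffffffffULL << (VA_BITS - 1))
    #define KIMAGE_VADDR     PAGE_OFFSET      /* equal, for now */
    #define TEXT_OFFSET      0x80000ULL       /* assumed */

    /* The kernel's virtual offset described above. */
    #define KERNEL_VIRT_BASE (KIMAGE_VADDR + TEXT_OFFSET)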
-
- 16 February 2016, 1 commit
-
-
Committed by Mark Rutland

Currently the zero page is set up in paging_init, and thus we cannot use the zero page earlier. We use the zero page as a reserved TTBR value from which no TLB entries may be allocated (e.g. when uninstalling the idmap). To enable such usage earlier (as may be required for invasive changes to the kernel page tables), and to minimise the time that the idmap is active, we need to be able to use the zero page before paging_init.

This patch follows the example set by x86, by allocating the zero page at compile time, in .bss. This means that the zero page itself is available immediately upon entry to start_kernel (as we zero .bss before this), and also means that the zero page takes up no space in the raw Image binary. The associated struct page is allocated in bootmem_init, and remains unavailable until this time.

Outside of arch code, the only users of empty_zero_page assume that the empty_zero_page symbol refers to the zeroed memory itself, and that ZERO_PAGE(x) must be used to acquire the associated struct page, following the example of x86. This patch also brings arm64 in line with these assumptions.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
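A simplified C sketch of the compile-time allocation; kernel types and the phys_to_page()/__pa() helpers are stand-ins here, not the arm64 definitions:

    /* Zero-initialised and page-aligned, so it lands in .bss and takes
     * no space in the raw Image binary. */
    #define PAGE_SIZE 4096

    unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
        __attribute__((aligned(PAGE_SIZE)));

    /* Generic code obtains the struct page via ZERO_PAGE(), as on x86;
     * phys_to_page()/__pa() are hypothetical stand-ins for kernel helpers. */
    #define ZERO_PAGE(vaddr)  phys_to_page(__pa(empty_zero_page))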
-
- 25 January 2016, 1 commit
-
-
Committed by Lorenzo Pieralisi

The Performance Monitors extension is an optional feature of the AArch64 architecture; therefore, in order to access Performance Monitors registers safely, the kernel should detect the architected PMU unit presence through the ID_AA64DFR0_EL1 register PMUVer field before accessing them.

This patch implements a guard by reading the ID_AA64DFR0_EL1 register PMUVer field to detect the architected PMU presence and prevent accessing PMU system registers if the Performance Monitors extension is not implemented in the core.

Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: <stable@vger.kernel.org>
Fixes: 60792ad3 ("arm64: kernel: enforce pmuserenr_el0 initialization and restore")
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reported-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
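A hedged C sketch of such a guard (AArch64-only inline asm, EL1 context; the real check is done in assembly during early CPU setup, and the field encodings follow the architecture's PMUVer definition):

    /* Skip PMU register writes when ID_AA64DFR0_EL1.PMUVer (bits [11:8])
     * reports no architected PMU. */
    #include <stdint.h>

    static inline uint64_t read_id_aa64dfr0(void)
    {
        uint64_t val;

        __asm__ ("mrs %0, id_aa64dfr0_el1" : "=r" (val));
        return val;
    }

    static void reset_pmuserenr_el0(void)
    {
        unsigned int pmuver = (read_id_aa64dfr0() >> 8) & 0xf;

        /* 0x0: no PMU; 0xf: IMPLEMENTATION DEFINED, not architected. */
        if (pmuver == 0x0 || pmuver == 0xf)
            return;

        /* EL1-only: clear PMUSERENR_EL0 as before, now safely guarded. */
        __asm__ volatile("msr pmuserenr_el0, xzr");
    }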
-
- 07 January 2016, 1 commit
-
-
Committed by Mark Rutland

Currently we use an open-coded memzero to clear the BSS. As it is a trivial implementation, it is sub-optimal.

Our optimised memset doesn't use the stack, is position-independent, and for the memzero case can use DC ZVA to clear large blocks efficiently. In __mmap_switched the MMU is on and there are no live caller-saved registers, so we can safely call an uninstrumented memset.

This patch changes __mmap_switched to use memset when clearing the BSS. We use the __pi_memset alias so as to avoid any instrumentation in all kernel configurations.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
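In C terms the change amounts to the following; the actual call is made from assembly in __mmap_switched, the linker-provided symbols are shown as externs, and __pi_memset is the uninstrumented alias:

    /* C-level sketch of the new BSS clearing. */
    extern char __bss_start[], __bss_stop[];
    extern void *__pi_memset(void *s, int c, unsigned long n);

    void clear_bss(void)
    {
        __pi_memset(__bss_start, 0, __bss_stop - __bss_start);
    }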
-
- 08 December 2015, 1 commit
-
-
Committed by Jungseok Lee

There is a need to figure out how to manage struct thread_info data once an IRQ stack is introduced: under the current thread_info calculation logic, the struct thread_info information would have to be copied to the IRQ stack whenever context switching is invoked, which is too expensive an approach to keep supporting.

Instead, this patch pays attention to sp_el0, which is an unused scratch register in EL1 context. Utilising sp_el0 not only simplifies the management, but also prevents the text section size from growing significantly due to a statically allocated IRQ stack, as it removes the masking operations using THREAD_SIZE in many places.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
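The resulting access pattern can be sketched in C as a single system-register read (AArch64-only, EL1 context; struct thread_info is reduced to a stub, and the old mask-the-stack approach is shown only as a comment for contrast):

    /* Keep the thread_info pointer in sp_el0 and read it back with one
     * mrs instead of masking the stack pointer with THREAD_SIZE. */
    struct thread_info { int preempt_count; };  /* stub for illustration */

    static inline struct thread_info *current_thread_info(void)
    {
        unsigned long ti;

        /* Old approach (conceptually): sp & ~(THREAD_SIZE - 1) */
        __asm__ ("mrs %0, sp_el0" : "=r" (ti));
        return (struct thread_info *)ti;
    }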
-
- 20 October 2015, 3 commits
-
-
Committed by Suzuki K. Poulose

Ensure that the selected page size is supported by the CPU(s); if it is not, park the CPU.

Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K. Poulose

At the moment, we only support a maximum of 3 levels of page table for the swapper. With a 48-bit VA, 64K needs only 3 levels and 4K uses section mapping. Add support for a 4-level page table for the swapper, needed by 16K pages.

Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K. Poulose

Move the kernel pagetable (both swapper and idmap) definitions from the generic asm/page.h to a new file, asm/kernel-pgtable.h. This is mostly a cosmetic change, to clean up asm/page.h and get rid of the arch specific details which are not needed by the generic code. Also rename the symbols to prevent conflicts, e.g., BLOCK_SHIFT => SWAPPER_BLOCK_SHIFT.

Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 13 October 2015, 1 commit
-
-
Committed by Andrey Ryabinin

This patch adds arch specific code for the kernel address sanitizer (see Documentation/kasan.txt).

1/8 of kernel addresses is reserved for shadow memory. There was no hole big enough for this, so the virtual addresses for the shadow were stolen from the vmalloc area. At the early boot stage the whole shadow region is populated with just one physical page (kasan_zero_page). Later, this page is reused as a read-only zero shadow for some memory that KASan currently doesn't track (vmalloc). After mapping the physical memory, pages for shadow memory are allocated and mapped.

Functions like memset/memmove/memcpy do a lot of memory accesses. If a bad pointer is passed to one of these functions it is important to catch this. The compiler's instrumentation cannot do this since these functions are written in assembly. KASan replaces the memory functions with manually instrumented variants. The original functions are declared as weak symbols so that strong definitions in mm/kasan/kasan.c can replace them. The original functions have aliases with a '__' prefix in the name, so we can call the non-instrumented variant if needed. Some files are built without KASan instrumentation (e.g. mm/slub.c); in those, the original mem* functions are replaced (via #define) with the prefixed variants to disable memory access checks.

Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
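The shadow translation at the heart of the 1/8 reservation can be sketched as follows; the offset value is an assumed placeholder, not the real arm64 constant:

    /* Each shadow byte covers 8 bytes of kernel address space. */
    #include <stdint.h>

    #define KASAN_SHADOW_SCALE_SHIFT  3
    #define KASAN_SHADOW_OFFSET       0xdfff200000000000UL  /* assumed */

    static inline uint8_t *kasan_mem_to_shadow(const void *addr)
    {
        return (uint8_t *)(((uintptr_t)addr >> KASAN_SHADOW_SCALE_SHIFT) +
                           KASAN_SHADOW_OFFSET);
    }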
-
- 12 October 2015, 1 commit
-
-
Committed by Ard Biesheuvel

Since arm64 does not use a builtin decompressor, the EFI stub is built into the kernel proper. So far, this has been working fine, but actually, since the stub is in fact a PE/COFF relocatable binary that is executed at an unknown offset in the 1:1 mapping provided by the UEFI firmware, we should not be seamlessly sharing code with the kernel proper, which is a position dependent executable linked at a high virtual offset.

So instead, separate the contents of libstub and its dependencies by putting them into their own namespace, prefixing all of their symbols with __efistub. This way, we have tight control over what parts of the kernel proper are referenced by the stub.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Matt Fleming <matt.fleming@intel.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 10 October 2015, 1 commit
-
-
Committed by Marc Zyngier

Contrary to what was originally expected, EL3 firmware can (for whatever reason) disable GICv3 system register access. In this case, the kernel explodes very early.

Work around this by testing whether the SRE bit sticks or not. If it doesn't, abort the GICv3 setup, and pray that the firmware has passed a DT that doesn't contain a GICv3 node.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 15 September 2015, 1 commit
-
-
Committed by Will Deacon

When entering the kernel at EL2, we fail to initialise the MDCR_EL2 register which controls debug access and PMU capabilities at EL1.

This patch ensures that the register is initialised so that all traps are disabled and all the PMU counters are available to the host. When a guest is scheduled, KVM takes care to configure trapping appropriately.

Cc: <stable@vger.kernel.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 05 August 2015, 1 commit
-
-
Committed by Will Deacon

The arm64 booting document requires that the bootloader has cleaned the kernel image to the PoC. However, when a CPU re-enters the kernel due to either a CPU hotplug "on" event or resuming from a low-power state (e.g. cpuidle), the kernel text may in fact be dirty at the PoU due to things like alternative patching or even module loading.

Thanks to I-cache speculation with the MMU off, stale instructions could be fetched prior to enabling the MMU, potentially leading to crashes when executing regions of code that have been modified at runtime.

This patch addresses the issue by ensuring that the local I-cache is invalidated immediately after a CPU has enabled its MMU but before jumping out of the identity mapping. Any stale instructions fetched from the PoC will then be discarded and refetched correctly from the PoU. Patching kernel text executed prior to the MMU being enabled is prohibited, so the early entry code will always be clean.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
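The maintenance amounts to the following sequence, shown here as a GCC inline-asm sketch for illustration; the real code is written directly in head.S after the MMU is enabled:

    /* Invalidate the local I-cache and synchronise before branching out
     * of the identity mapping. */
    static inline void local_icache_inval_all(void)
    {
        __asm__ volatile(
            "ic iallu\n\t"   /* invalidate local I-cache to PoU */
            "dsb nsh\n\t"    /* complete the maintenance */
            "isb"            /* discard already-fetched instructions */
            ::: "memory");
    }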
-
- 27 July 2015, 1 commit
-
-
Committed by Will Deacon

Nobody seems to be producing !SMP systems anymore, so this is just becoming a source of kernel bugs, particularly if people want to use coherent DMA with non-shared pages.

This patch forces CONFIG_SMP=y for arm64, removing a modest amount of code in the process.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 03 June 2015, 1 commit
-
-
Committed by Ard Biesheuvel

Commit ea8c2e11 ("arm64: Extend the idmap to the whole kernel image") changed the early page table code so that the entire kernel Image is covered by the identity map. This allows functions that need to enable or disable the MMU to reside anywhere in the kernel Image.

However, this change has the unfortunate side effect that the Image cannot cross a physical 512 MB alignment boundary anymore, since the early page table code cannot deal with the Image crossing a /virtual/ 512 MB alignment boundary.

So instead, reduce the ID map to a single page, populated by the contents of the .idmap.text section. Only three functions reside there at the moment: __enable_mmu(), cpu_resume_mmu() and cpu_reset(). If new code is introduced that needs to manipulate the MMU state, it should be added to this section as well.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 02 June 2015, 1 commit
-
-
Committed by Ard Biesheuvel

Currently, the FDT blob needs to be in the same 512 MB region as the kernel, so that it can be mapped into the kernel virtual memory space very early on using a minimal set of statically allocated translation tables.

Now that we have early fixmap support, we can relax this restriction by moving the permanent FDT mapping to the fixmap region instead. This way, the FDT blob may be anywhere in memory.

This also moves the vetting of the FDT to mmu.c, since the early init code in head.S does not handle mapping of the FDT anymore. At the same time, fix up some comments in head.S that have gone stale.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 24 March 2015, 2 commits
-
-
Committed by Mark Rutland

We write idmap_t0sz with SCTLR_EL1.{C,M} clear, but we only have the guarantee that the kernel Image is clean, not invalidated, in the caches, and therefore we might read a stale value once the MMU is enabled.

This patch ensures we invalidate the corresponding cacheline after the write, as we do for all other data written before we set SCTLR_EL1.{C,M}, guaranteeing that the value will be visible later. We rely on the DSBs in __create_page_tables to complete the maintenance.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Mark Rutland

After writing the page tables, we use __inval_cache_range to invalidate any stale cache entries. Strongly Ordered memory accesses are not ordered w.r.t. cache maintenance instructions, and hence explicit memory barriers are required to provide this ordering. However, __inval_cache_range was written to be used on Normal Cacheable memory once the MMU and caches are on, and does not have any barriers prior to the DC instructions.

This patch adds a DMB between the page tables being written and the corresponding cachelines being invalidated, ensuring that the invalidation makes the new data visible to subsequent cacheable accesses. A barrier is not required before the prior invalidate as we do not access the page table memory area prior to this, and earlier barriers in preserve_boot_args and set_cpu_boot_mode_flag ensure ordering w.r.t. any stores performed prior to entering Linux.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Fixes: c218bca7 ("arm64: Relax the kernel cache requirements for boot")
Signed-off-by: Will Deacon <will.deacon@arm.com>
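Conceptually, the ordering requirement looks like the following AArch64 inline-asm sketch (EL1 context); the 64-byte stride and the single DC variant are simplifying assumptions standing in for the real __inval_cache_range:

    /* A DMB makes the page-table stores visible to the DC invalidates
     * that follow; a DSB then completes the invalidation. */
    static inline void inval_table_range(char *start, char *end)
    {
        __asm__ volatile("dmb sy" ::: "memory");  /* order stores vs. DC */

        for (char *p = start; p < end; p += 64)   /* 64 = assumed granule */
            __asm__ volatile("dc ivac, %0" :: "r" (p) : "memory");

        __asm__ volatile("dsb sy" ::: "memory");  /* complete invalidation */
    }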
-
- 23 March 2015, 1 commit
-
-
Committed by Ard Biesheuvel

The page size and the number of translation levels, and hence the supported virtual address range, are build-time configurables on arm64 whose optimal values are use case dependent. However, in the current implementation, if the system's RAM is located at a very high offset, the virtual address range needs to reflect that merely because the identity mapping, which is only used to enable or disable the MMU, requires the extended virtual range to map the physical memory at an equal virtual offset.

This patch relaxes that requirement by increasing the number of translation levels for the identity mapping only, and only when actually needed, i.e., when the system RAM's offset is found to be out of reach at runtime.

Tested-by: Laura Abbott <lauraa@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 20 March 2015, 7 commits
-
-
Committed by Ard Biesheuvel

According to the arm64 boot protocol, registers x1 to x3 should be zero upon kernel entry, and non-zero values are reserved for future use. This future use is going to be problematic if we never enforce the current rules, so start enforcing them now, by emitting a warning if non-zero values are detected.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
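The check itself is simple; in this C sketch, boot_args[] stands in for the x0-x3 values stashed by the early assembly, and the message text is illustrative:

    #include <stdio.h>

    unsigned long long boot_args[4];  /* filled in by early boot code */

    void check_boot_args(void)
    {
        if (boot_args[1] || boot_args[2] || boot_args[3])
            printf("WARNING: x1-x3 nonzero in violation of boot protocol:\n"
                   "\tx1: %016llx\n\tx2: %016llx\n\tx3: %016llx\n",
                   boot_args[1], boot_args[2], boot_args[3]);
    }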
-
Committed by Ard Biesheuvel

This removes the function __calc_phys_offset and all open coded virtual to physical address translations using the offset kept in x28. Instead, just use absolute or PC-relative symbol references as appropriate when referring to virtual or physical addresses, respectively.

Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Ard Biesheuvel

Enabling of the MMU is split into two functions, with an align and a branch in the middle. On arm64, the entire kernel Image is ID mapped, so this is really not necessary, and we can just merge it into a single function.

Also replace an open coded adrp/add pair referring to __enable_mmu with adr_l.

Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Ard Biesheuvel

Replace the confusing virtual/physical address arithmetic with a simple PC-relative reference.

Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Ard Biesheuvel

This removes the confusing __switch_data object from head.S, and replaces it with standard PC-relative references to the various symbols it encapsulates.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Ard Biesheuvel

The global processor_id is assigned the MIDR_EL1 value of the boot CPU in the early init code, but is never referenced afterwards. As the relevance of the MIDR_EL1 value of the boot CPU is debatable anyway, especially under big.LITTLE, let's remove it before anyone starts using it.

Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Marc Zyngier

struct cpu_table is an artifact left from the (very) early days of the arm64 port, and its only real use is to allow the most beautiful "AArch64 Processor" string to be displayed at boot time. Really? Yes, really.

Let's get rid of it. In order to avoid another BogoMips-gate, the aforementioned string is preserved.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 18 March 2015, 1 commit
-
-
Committed by Mark Rutland

Commit 828e9834 ("arm64: head: create a new function for setting the boot_cpu_mode flag") added BOOT_CPU_MODE_EL1, a nonzero value replacing uses of zero. However it failed to update __boot_cpu_mode appropriately.

A CPU booted at EL2 writes BOOT_CPU_MODE_EL2 to __boot_cpu_mode[1], and a CPU booted at EL1 writes BOOT_CPU_MODE_EL1 to __boot_cpu_mode[0]. Later is_hyp_mode_mismatched() determines there to be a mismatch if __boot_cpu_mode[0] != __boot_cpu_mode[1].

If all CPUs are booted at EL1, __boot_cpu_mode[0] will be set to BOOT_CPU_MODE_EL1, but __boot_cpu_mode[1] will retain its initial value of zero, and is_hyp_mode_mismatched() will erroneously determine that the boot modes are mismatched. This hasn't been a problem so far, but later patches which will make use of is_hyp_mode_mismatched() expect it to work correctly.

This patch initialises __boot_cpu_mode[1] to BOOT_CPU_MODE_EL1, fixing the erroneous mismatch detection when all CPUs are booted at EL1.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
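A C sketch of the flag array after this fix; the constants mirror the arm64 definitions, and the point is that both slots now hold nonzero defaults so an all-EL1 (or all-EL2) boot leaves them equal:

    #include <stdint.h>

    #define BOOT_CPU_MODE_EL1  0xe11U
    #define BOOT_CPU_MODE_EL2  0xe12U

    /* [0] is written by CPUs that entered at EL1, [1] by CPUs that
     * entered at EL2; the defaults cover slots that are never written. */
    uint32_t __boot_cpu_mode[2] = { BOOT_CPU_MODE_EL2, BOOT_CPU_MODE_EL1 };

    static inline int is_hyp_mode_mismatched(void)
    {
        return __boot_cpu_mode[0] != __boot_cpu_mode[1];
    }

    static inline int is_hyp_mode_available(void)
    {
        return __boot_cpu_mode[0] == BOOT_CPU_MODE_EL2 &&
               __boot_cpu_mode[1] == BOOT_CPU_MODE_EL2;
    }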
-
- 14 March 2015, 1 commit
-
-
Committed by Ard Biesheuvel

Another one for the big head.S spring cleaning: the label should come after the .align, or it may point to the padding.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 27 November 2014, 1 commit
-
-
Committed by Laura Abbott

The head.text section is intended to be run at early bootup, before any of the regular kernel mappings have been set up. Parts of head.text may be freed back into the buddy allocator due to TEXT_OFFSET, so for security requirements this memory must not be executable. The suspend/resume/hotplug code path requires some of these head.S functions to run, however, which means they need to be executable.

Support these conflicting requirements by moving the few head.text functions that need to be executable to the text section, which has the appropriate page table permissions.

Tested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 25 November 2014, 1 commit
-
-
Committed by Laura Abbott

The hyp stub vectors are currently loaded using adr. This instruction has a +/- 1MB range for the loading address. If the alignment for sections is changed, the address may be more than 1MB away, resulting in relocation errors. Switch to using adrp for getting the address, to ensure we aren't affected by the location of __hyp_stub_vectors.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 05 November 2014, 3 commits
-
-
Committed by Ard Biesheuvel

Change our PE/COFF header to use the minimum file alignment of 512 bytes (0x200), as mandated by the PE/COFF spec v8.3. Also update the linker script so that the Image file itself is a round multiple of FileAlignment.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Roy Franz <roy.franz@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
-
Committed by Ard Biesheuvel

Position independent AArch64 code needs to be linked and loaded at the same relative offset from a 4 KB boundary, or adrp/add and adrp/ldr pairs will not work correctly. (This is how PC-relative symbol references with a 4 GB reach are emitted.)

We need to declare this in the PE/COFF header, otherwise the PE/COFF loader may load the Image and invoke the stub at an offset which violates this rule.

Reviewed-by: Roy Franz <roy.franz@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
-
Committed by Ard Biesheuvel

After the EFI stub has done its business, it jumps into the kernel by branching to offset #0 of the loaded Image, which is where it expects to find the header containing a 'branch to stext' instruction.

However, the UEFI spec 2.1.1 states the following regarding PE/COFF image loading: "A UEFI image is loaded into memory through the LoadImage() Boot Service. This service loads an image with a PE32+ format into memory. This PE32+ loader is required to load all sections of the PE32+ image into memory." In other words, it is /not/ required to load parts of the image that are not covered by a PE/COFF section, so it may not have loaded the header at the expected offset, as it is not covered by any PE/COFF section.

So instead, jump to 'stext' directly, which is at the base of the PE/COFF .text section, by supplying a symbol 'stext_offset' to efi-entry.o which contains the relative offset of stext into the Image. Also replace other open coded calculations of the same value with a reference to 'stext_offset'.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Roy Franz <roy.franz@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
-