- 13 October 2015, 1 commit

By Andrey Ryabinin

This patch adds the arch-specific code for the kernel address sanitizer (see Documentation/kasan.txt). 1/8 of the kernel address space is reserved for shadow memory. There was no hole big enough for this, so the virtual addresses for the shadow were stolen from the vmalloc area.

At early boot, the whole shadow region is populated with just one physical page (kasan_zero_page). Later, this page is reused as a read-only zero shadow for memory that KASan does not currently track (vmalloc). After the physical memory has been mapped, pages for the shadow memory are allocated and mapped.

Functions like memset/memmove/memcpy perform a lot of memory accesses. If a bad pointer is passed to one of these functions, it is important to catch it. The compiler's instrumentation cannot do this since these functions are written in assembly, so KASan replaces the memory functions with manually instrumented variants. The original functions are declared as weak symbols so that the strong definitions in mm/kasan/kasan.c can replace them, and they also have aliases with a '__' prefix so the non-instrumented variants can still be called when needed. Some files are built without KASan instrumentation (e.g. mm/slub.c); for those, the mem* functions are replaced (via #define) with the prefixed variants to disable memory access checks.

Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
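
The weak-alias arrangement described above can be pictured with a minimal assembly sketch (illustrative only; the real files live under arch/arm64/lib and the copy loop is omitted here):

```asm
// Illustrative sketch of the weak-alias pattern, not the actual kernel source.
	.weak	memcpy			// KASan's instrumented memcpy may override this
	.globl	__memcpy		// __memcpy always names this asm implementation
	.type	memcpy, %function
	.type	__memcpy, %function
__memcpy:
memcpy:
	// ... optimised copy loop omitted ...
	ret
```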

- 12 October 2015, 1 commit

By Ard Biesheuvel

Since arm64 does not use a builtin decompressor, the EFI stub is built into the kernel proper. So far this has been working fine, but the stub is in fact a PE/COFF relocatable binary that is executed at an unknown offset in the 1:1 mapping provided by the UEFI firmware, so we should not be seamlessly sharing code with the kernel proper, which is a position-dependent executable linked at a high virtual offset.

Instead, separate the contents of libstub and its dependencies by putting them into their own namespace, prefixing all of their symbols with __efistub. This way, we have tight control over which parts of the kernel proper are referenced by the stub.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Matt Fleming <matt.fleming@intel.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 15 September 2015, 1 commit

By Will Deacon

When entering the kernel at EL2, we fail to initialise the MDCR_EL2 register, which controls debug access and PMU capabilities at EL1. This patch ensures that the register is initialised so that all traps are disabled and all the PMU counters are available to the host. When a guest is scheduled, KVM takes care of configuring trapping appropriately.

Cc: <stable@vger.kernel.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
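
A minimal sketch of the kind of initialisation this describes (assumed form, not the verbatim head.S hunk): MDCR_EL2 is written with all trap bits clear and MDCR_EL2.HPMN set to expose every implemented PMU counter to EL1.

```asm
// Illustrative sketch only; the register choice (x0) is arbitrary.
	mrs	x0, pmcr_el0		// PMCR_EL0.N = number of implemented counters
	ubfx	x0, x0, #11, #5		// move N (bits [15:11]) into the HPMN field (bits [4:0])
	msr	mdcr_el2, x0		// all other MDCR_EL2 trap bits remain clear
```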

- 05 August 2015, 1 commit

By Will Deacon

The arm64 booting document requires that the bootloader has cleaned the kernel image to the PoC. However, when a CPU re-enters the kernel due to either a CPU hotplug "on" event or resuming from a low-power state (e.g. cpuidle), the kernel text may in fact be dirty at the PoU due to things like alternative patching or even module loading.

Thanks to I-cache speculation with the MMU off, stale instructions could be fetched prior to enabling the MMU, potentially leading to crashes when executing regions of code that have been modified at runtime.

This patch addresses the issue by ensuring that the local I-cache is invalidated immediately after a CPU has enabled its MMU but before jumping out of the identity mapping. Any stale instructions fetched from the PoC will then be discarded and refetched correctly from the PoU. Patching kernel text executed prior to the MMU being enabled is prohibited, so the early entry code will always be clean.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
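
The sequence being described looks roughly like this (a sketch, not the exact patch):

```asm
// Illustrative sketch: invalidate the local I-cache right after the MMU comes on,
// while still executing from the identity mapping.
	msr	sctlr_el1, x0		// enable the MMU and caches
	isb
	ic	iallu			// discard instructions speculated from the PoC
	dsb	nsh
	isb
```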

- 27 July 2015, 1 commit

By Will Deacon

Nobody seems to be producing !SMP systems anymore, so this is just becoming a source of kernel bugs, particularly if people want to use coherent DMA with non-shared pages. This patch forces CONFIG_SMP=y for arm64, removing a modest amount of code in the process.

Signed-off-by: Will Deacon <will.deacon@arm.com>

- 03 June 2015, 1 commit

By Ard Biesheuvel

Commit ea8c2e11 ("arm64: Extend the idmap to the whole kernel image") changed the early page table code so that the entire kernel Image is covered by the identity map. This allows functions that need to enable or disable the MMU to reside anywhere in the kernel Image. However, this change has the unfortunate side effect that the Image cannot cross a physical 512 MB alignment boundary anymore, since the early page table code cannot deal with the Image crossing a /virtual/ 512 MB alignment boundary.

So instead, reduce the ID map to a single page, populated by the contents of the .idmap.text section. Only three functions reside there at the moment: __enable_mmu(), cpu_resume_mmu() and cpu_reset(). If new code is introduced that needs to manipulate the MMU state, it should be added to this section as well.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
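
Code that must live in the ID map is collected as sketched below (assumed form; ENTRY/ENDPROC are the usual kernel assembler annotations):

```asm
// Illustrative sketch of the section placement, not the exact kernel source.
	.pushsection ".idmap.text", "ax"
ENTRY(__enable_mmu)
	// ... MMU enable sequence runs from the single ID-mapped page ...
ENDPROC(__enable_mmu)
	.popsection
```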

- 02 June 2015, 1 commit

By Ard Biesheuvel

Currently, the FDT blob needs to be in the same 512 MB region as the kernel, so that it can be mapped into the kernel virtual memory space very early on using a minimal set of statically allocated translation tables. Now that we have early fixmap support, we can relax this restriction by moving the permanent FDT mapping to the fixmap region instead. This way, the FDT blob may be anywhere in memory.

This also moves the vetting of the FDT to mmu.c, since the early init code in head.S no longer handles mapping of the FDT. At the same time, fix up some comments in head.S that have gone stale.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 24 March 2015, 2 commits

By Mark Rutland

We write idmap_t0sz with SCTLR_EL1.{C,M} clear, but we only have the guarantee that the kernel Image is clean, not invalidated, in the caches, and therefore we might read a stale value once the MMU is enabled.

This patch ensures we invalidate the corresponding cacheline after the write, as we do for all other data written before we set SCTLR_EL1.{C,M}, guaranteeing that the value will be visible later. We rely on the DSBs in __create_page_tables to complete the maintenance.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
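
A sketch of the added maintenance (register names assumed, not quoted from the patch):

```asm
// Illustrative sketch: write idmap_t0sz with the MMU/caches off, then invalidate
// the line so a later cacheable read cannot observe a stale value.
	adr_l	x6, idmap_t0sz
	str	x5, [x6]
	dmb	sy
	dc	ivac, x6		// invalidate any stale copy of the cacheline
```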

By Mark Rutland

After writing the page tables, we use __inval_cache_range to invalidate any stale cache entries. Strongly Ordered memory accesses are not ordered w.r.t. cache maintenance instructions, and hence explicit memory barriers are required to provide this ordering. However, __inval_cache_range was written to be used on Normal Cacheable memory once the MMU and caches are on, and does not have any barriers prior to the DC instructions.

This patch adds a DMB between the page tables being written and the corresponding cachelines being invalidated, ensuring that the invalidation makes the new data visible to subsequent cacheable accesses. A barrier is not required before the prior invalidate as we do not access the page table memory area prior to this, and earlier barriers in preserve_boot_args and set_cpu_boot_mode_flag ensure ordering w.r.t. any stores performed prior to entering Linux.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Fixes: c218bca7 ("arm64: Relax the kernel cache requirements for boot")
Signed-off-by: Will Deacon <will.deacon@arm.com>
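
In outline, the fix is a barrier between the table stores and the invalidation (a sketch; the surrounding register setup is omitted):

```asm
// Illustrative sketch only.
	// ... page table entries stored with the MMU off ...
	dmb	sy			// order the stores before the D-cache invalidate
	// x0 = start, x1 = end of the page table area (set up earlier)
	bl	__inval_cache_range
```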

- 23 March 2015, 1 commit

By Ard Biesheuvel

The page size and the number of translation levels, and hence the supported virtual address range, are build-time configurables on arm64 whose optimal values are use-case dependent. However, in the current implementation, if the system's RAM is located at a very high offset, the virtual address range needs to reflect that merely because the identity mapping, which is only used to enable or disable the MMU, requires the extended virtual range to map the physical memory at an equal virtual offset.

This patch relaxes that requirement by increasing the number of translation levels for the identity mapping only, and only when actually needed, i.e. when the system RAM's offset is found to be out of reach at runtime.

Tested-by: Laura Abbott <lauraa@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

- 20 March 2015, 7 commits

By Ard Biesheuvel

According to the arm64 boot protocol, registers x1 to x3 should be zero upon kernel entry, and non-zero values are reserved for future use. This future use is going to be problematic if we never enforce the current rules, so start enforcing them now by emitting a warning if non-zero values are detected.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
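
One way to picture the enforcement (a sketch under assumptions — the warning itself is emitted later from C once the console is up, and the early cache maintenance of the stored values is omitted; boot_args is the array the recorded registers land in):

```asm
// Illustrative sketch: stash x0..x3 very early so C code can warn later
// if x1..x3 were not zero as required by the boot protocol.
preserve_boot_args:
	mov	x21, x0			// keep the FDT pointer for later use
	adr_l	x0, boot_args		// record the original x0..x3
	stp	x21, x1, [x0]
	stp	x2, x3, [x0, #16]
	ret				// (cache maintenance of the stores omitted here)
```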

By Ard Biesheuvel

This removes the function __calc_phys_offset and all open-coded virtual-to-physical address translations using the offset kept in x28. Instead, just use absolute or PC-relative symbol references as appropriate when referring to virtual or physical addresses, respectively.

Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
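
The replacement pattern, in miniature (the symbol name here is chosen only for illustration):

```asm
// Illustrative sketch: a PC-relative pair reaches the symbol at its runtime
// (physical) address directly, with no x28 offset arithmetic.
	adrp	x0, __boot_cpu_mode
	add	x0, x0, :lo12:__boot_cpu_mode
	// or, via the helper macro that wraps the same pair:
	adr_l	x0, __boot_cpu_mode
```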

By Ard Biesheuvel

Enabling of the MMU is split into two functions, with an align and a branch in the middle. On arm64, the entire kernel Image is ID mapped, so this is really not necessary, and we can just merge it into a single function. Also replace an open-coded adrp/add reference to __enable_mmu with adr_l.

Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

By Ard Biesheuvel

Replace the confusing virtual/physical address arithmetic with a simple PC-relative reference.

Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

By Ard Biesheuvel

This removes the confusing __switch_data object from head.S and replaces it with standard PC-relative references to the various symbols it encapsulates.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

By Ard Biesheuvel

The global processor_id is assigned the MIDR_EL1 value of the boot CPU in the early init code, but is never referenced afterwards. As the relevance of the MIDR_EL1 value of the boot CPU is debatable anyway, especially under big.LITTLE, let's remove it before anyone starts using it.

Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

By Marc Zyngier

struct cpu_table is an artifact left over from the (very) early days of the arm64 port, and its only real use is to allow the most beautiful "AArch64 Processor" string to be displayed at boot time. Really? Yes, really. Let's get rid of it. In order to avoid another BogoMips-gate, the aforementioned string is preserved.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

- 18 March 2015, 1 commit

By Mark Rutland

Commit 828e9834 ("arm64: head: create a new function for setting the boot_cpu_mode flag") added BOOT_CPU_MODE_EL1, a non-zero value replacing uses of zero. However, it failed to update __boot_cpu_mode appropriately.

A CPU booted at EL2 writes BOOT_CPU_MODE_EL2 to __boot_cpu_mode[0], and a CPU booted at EL1 writes BOOT_CPU_MODE_EL1 to __boot_cpu_mode[1]. Later, is_hyp_mode_mismatched() determines there to be a mismatch if __boot_cpu_mode[0] != __boot_cpu_mode[1]. If all CPUs are booted at EL1, __boot_cpu_mode[0] will be set to BOOT_CPU_MODE_EL1, but __boot_cpu_mode[1] will retain its initial value of zero, and is_hyp_mode_mismatched will erroneously determine that the boot modes are mismatched. This hasn't been a problem so far, but later patches which will make use of is_hyp_mode_mismatched() expect it to work correctly.

This patch initialises __boot_cpu_mode[1] to BOOT_CPU_MODE_EL1, fixing the erroneous mismatch detection when all CPUs are booted at EL1.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

- 14 March 2015, 1 commit

By Ard Biesheuvel

Another one for the big head.S spring cleaning: the label should come after the .align, or it may point to the padding.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
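
In other words (a tiny illustration; the label name is made up):

```asm
// Illustrative only. Wrong order (the label can end up on alignment padding):
//	foo:	.align 3
//		.quad 0
// Right order (the label follows the alignment, so it points at the data):
	.align	3
foo:					// hypothetical label name
	.quad	0
```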

- 27 November 2014, 1 commit

By Laura Abbott

The head.text section is intended to be run at early boot, before any of the regular kernel mappings have been set up. Parts of head.text may be freed back into the buddy allocator due to TEXT_OFFSET, so for security reasons this memory must not be executable. The suspend/resume/hotplug code path, however, requires some of these head.S functions to run, which means they need to be executable. Support these conflicting requirements by moving the few head.text functions that need to be executable to the text section, which has the appropriate page table permissions.

Tested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

- 25 November 2014, 1 commit

By Laura Abbott

The hyp stub vectors are currently loaded using adr. This instruction has a +/- 1 MB range for the load address. If the alignment for sections is changed, the address may be more than 1 MB away, resulting in relocation errors. Switch to using adrp to get the address, to ensure we aren't affected by the location of __hyp_stub_vectors.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
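
Sketch of the change (illustrative):

```asm
// Illustrative sketch: adrp/add has a +/-4 GB reach, so the reference keeps
// working even if __hyp_stub_vectors ends up more than 1 MB away.
	adrp	x0, __hyp_stub_vectors
	add	x0, x0, :lo12:__hyp_stub_vectors
	msr	vbar_el2, x0		// install the hyp stub vectors
```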

- 05 November 2014, 3 commits

By Ard Biesheuvel

Change our PE/COFF header to use the minimum file alignment of 512 bytes (0x200), as mandated by the PE/COFF spec v8.3. Also update the linker script so that the Image file itself is a round multiple of FileAlignment.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Roy Franz <roy.franz@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

By Ard Biesheuvel

Position-independent AArch64 code needs to be linked and loaded at the same relative offset from a 4 KB boundary, or adrp/add and adrp/ldr pairs will not work correctly. (This is how PC-relative symbol references with a 4 GB reach are emitted.) We need to declare this in the PE/COFF header, otherwise the PE/COFF loader may load the Image and invoke the stub at an offset which violates this rule.

Reviewed-by: Roy Franz <roy.franz@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

By Ard Biesheuvel

After the EFI stub has done its business, it jumps into the kernel by branching to offset #0 of the loaded Image, which is where it expects to find the header containing a 'branch to stext' instruction. However, the UEFI spec 2.1.1 states the following regarding PE/COFF image loading: "A UEFI image is loaded into memory through the LoadImage() Boot Service. This service loads an image with a PE32+ format into memory. This PE32+ loader is required to load all sections of the PE32+ image into memory." In other words, it is /not/ required to load parts of the image that are not covered by a PE/COFF section, so it may not have loaded the header at the expected offset, as it is not covered by any PE/COFF section.

So instead, jump to 'stext' directly, which is at the base of the PE/COFF .text section, by supplying a symbol 'stext_offset' to efi-entry.o which contains the relative offset of stext into the Image. Also replace other open-coded calculations of the same value with a reference to 'stext_offset'.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Roy Franz <roy.franz@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

- 08 September 2014, 1 commit

By Ard Biesheuvel

The static memory footprint of a kernel Image at boot is larger than the Image file itself. Things like .bss data and initial page tables are allocated statically but populated dynamically, so their content is not contained in the Image file. However, if EFI (or GRUB) has loaded the Image at precisely the desired offset of base of DRAM + TEXT_OFFSET, the Image will be booted in place, and we have to make sure that the allocation done by the PE/COFF loader is large enough.

Fix this by growing the PE/COFF .text section to cover the entire static memory footprint. The part of the section that is not covered by the payload will be zero-initialised by the PE/COFF loader.

Acked-by: Mark Salter <msalter@redhat.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

- 27 August 2014, 1 commit

By Geoff Levand

Remove an unused local variable from head.S. It seems this was never used, even from the initial commit 9703d9d7 ("arm64: Kernel booting and initialisation"), and is a leftover from a previous implementation of __calc_phys_offset.

Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

- 20 August 2014, 1 commit

By Ard Biesheuvel

When booting via UEFI, the kernel Image is loaded at a 4 kB boundary and the embedded EFI stub is executed in place. The EFI stub relocates the Image to reside TEXT_OFFSET bytes above a 2 MB boundary and jumps into the kernel proper.

In AArch64, PC-relative symbol references are emitted using adrp/add or adrp/ldr pairs, where the offset into a 4 kB page is resolved using a separate :lo12: relocation. This implicitly assumes that the code will always be executed at the same relative offset with respect to a 4 kB boundary, or the references will point to the wrong address. This means we should link the kernel at a 4 kB aligned base address in order to remain compatible with the base address the UEFI loader uses when doing the initial load of Image. So update the code that generates TEXT_OFFSET to choose a multiple of 4 kB. At the same time, update the code so it chooses from the interval [0..2MB) as the author originally intended.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

- 25 July 2014, 1 commit

By Catalin Marinas

GICv3 introduces new system registers accessible with the full msr/mrs syntax (e.g. mrs x0, Sop0_op1_CRn_CRm_op2). However, only recent binutils understand the new syntax. This patch introduces msr_s/mrs_s assembly macros which generate the equivalent instructions and converts the existing GICv3 code (both drivers/irqchip/ and arch/arm64/kernel/).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Olof Johansson <olof@lixom.net>
Tested-by: Olof Johansson <olof@lixom.net>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
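
Typical usage looks like this (a sketch; ICC_SRE_EL2 stands in for the sys_reg()-style encoding constant the kernel defines for that register):

```asm
// Illustrative usage sketch of the mrs_s/msr_s macros, not a verbatim kernel hunk.
	mrs_s	x0, ICC_SRE_EL2		// read a GICv3 register by its encoded name
	orr	x0, x0, #1		// ICC_SRE_EL2.SRE = 1
	msr_s	ICC_SRE_EL2, x0		// write it back
```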

- 23 July 2014, 5 commits

By Catalin Marinas

This patch allows support for 3 levels of page tables with the 64 KB page configuration, allowing a 48-bit VA space. The pgd is no longer a full PAGE_SIZE (PTRS_PER_PGD is 64) and (swapper|idmap)_pg_dir are not fully populated (pgd_alloc falls back to kzalloc).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>

By Catalin Marinas

This patch adds a create_table_entry macro which is used to populate pgd and pud entries, also reducing the number of arguments for create_pgd_entry.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
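
What such a macro looks like, in sketch form (argument names and descriptor bits are assumed, not quoted from head.S):

```asm
// Illustrative sketch of a create_table_entry-style macro.
	.macro	create_table_entry, tbl, virt, shift, ptrs, tmp1, tmp2
	lsr	\tmp1, \virt, #\shift
	and	\tmp1, \tmp1, #\ptrs - 1	// index of \virt at this level
	add	\tmp2, \tbl, #PAGE_SIZE		// next-level table follows this one
	orr	\tmp2, \tmp2, #3		// bits[1:0] = 0b11: table descriptor
	str	\tmp2, [\tbl, \tmp1, lsl #3]	// install the entry
	add	\tbl, \tbl, #PAGE_SIZE		// descend to the next-level table
	.endm
```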

By Catalin Marinas

Rather than having several Kconfig options, define an int option ARM64_PGTABLE_LEVELS, which will also be useful in converting some of the pgtable macros.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>

By Jungseok Lee

This patch implements 4 levels of translation tables, since 3 levels of page tables with 4 KB pages cannot support the 40-bit physical address space described in [1], due to the following issue.

The kernel logical memory map with 4 KB pages and 3 levels (0xffffffc000000000-0xffffffffffffffff) cannot cover the RAM region from 544 GB to 1024 GB in [1]. Specifically, the ARM64 kernel fails to create a mapping for this region in map_mem, since __phys_to_virt for this region overflows. If a SoC design follows [1], more than 32 GB of RAM would be placed from 544 GB; even a 64 GB system is supposed to use the region from 544 GB to 576 GB for only 32 GB of RAM. Naturally, this leads to enabling 4 levels of page tables to avoid hacking __virt_to_phys and __phys_to_virt. However, it is recommended that 4 levels of page tables only be enabled if the memory map is too sparse or there is about 512 GB of RAM.

References
----------
[1]: Principles of ARM Memory Maps, White Paper, Issue C

Signed-off-by: Jungseok Lee <jays.lee@samsung.com>
Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Steve Capper <steve.capper@linaro.org>
[catalin.marinas@arm.com: MEMBLOCK_INITIAL_LIMIT removed, same as PUD_SIZE]
[catalin.marinas@arm.com: early_ioremap_init() updated for 4 levels]
[catalin.marinas@arm.com: 48-bit VA depends on BROKEN until KVM is fixed]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>

By Catalin Marinas

The early_ioremap_init() function already handles fixmap pte initialisation, so upgrade this to cover all of pud/pmd/pte and remove one page from swapper_pg_dir.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>

- 10 July 2014, 4 commits

By Mark Rutland

The arm64 Image header contains a text_offset field which bootloaders are supposed to read to determine the offset (from a 2 MB aligned "start of memory" per booting.txt) at which to load the kernel. The offset is not well respected by bootloaders at present, and due to the lack of variation there is little incentive to support it. This is unfortunate for the sake of future kernels where we may wish to vary the text offset (even zeroing it).

This patch adds options to arm64 to enable fuzz-testing of text_offset. CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET forces the text offset to a random 16-byte aligned value in the range [0..2MB) at build time. It is recommended that distribution kernels enable randomization to test bootloaders, so that any compliance issues can be fixed early.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Tom Rini <trini@ti.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

By Mark Rutland

Currently the kernel Image is stripped of everything past the initial stack, and at runtime the memory is initialised and used by the kernel. This makes the effective minimum memory footprint of the kernel larger than the size of the loaded binary, though bootloaders have no mechanism to identify how large this minimum memory footprint is. This makes it difficult to choose safe locations to place both the kernel and other binaries required at boot (DTB, initrd, etc), such that the kernel won't clobber said binaries or other reserved memory during initialisation.

Additionally, when big-endian support was added, the image load offset was overlooked, and is currently of an arbitrary endianness, which makes it difficult for bootloaders to make use of it. It seems that bootloaders aren't respecting the image load offset at present anyway, and are assuming that offset 0x80000 will always be correct.

This patch adds an effective image size to the kernel header which describes the amount of memory from the start of the kernel Image binary which the kernel expects to use before detecting memory and handling any memory reservations. This can be used by bootloaders to choose suitable locations to load the kernel and/or other binaries such that the kernel will not clobber any memory unexpectedly. As before, memory reservations are required to prevent the kernel from clobbering these locations later.

Both the image load offset and the effective image size are forced to be little-endian regardless of the native endianness of the kernel, to enable bootloaders to load a kernel of arbitrary endianness. Bootloaders which wish to make use of the load offset can inspect the effective image size field for a non-zero value to determine if the offset is of a known endianness. To enable software to determine the endianness of the kernel, as may be required for certain use cases, a new flags field (also little-endian) is added to the kernel header to export this information.

The documentation is updated to clarify these details. To discourage future assumptions regarding the value of text_offset, the value at this point in time is removed from the main flow of the documentation (though kept as a compatibility note). Some minor formatting issues in the documentation are also corrected.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Tom Rini <trini@ti.com>
Cc: Geoff Levand <geoff@infradead.org>
Cc: Kevin Hilman <kevin.hilman@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
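
The header then carries roughly the following fields (a sketch; the real header emits these values through little-endian helper macros defined in image.h):

```asm
// Illustrative sketch of the Image header layout described above.
	b	stext			// executable entry branch
	.long	0			// reserved
	.quad	_kernel_offset_le	// image load offset, little-endian
	.quad	_kernel_size_le		// effective Image size, little-endian
	.quad	_kernel_flags_le	// informative flags (e.g. endianness), little-endian
```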

By Mark Rutland

Currently we place swapper_pg_dir and idmap_pg_dir below the kernel image, between PHYS_OFFSET and (PHYS_OFFSET + TEXT_OFFSET). However, bootloaders may use portions of this memory below the kernel, and we do not parse the memory reservation list until after the MMU has been enabled. As such, we may clobber some memory a bootloader wishes to have preserved.

To enable the use of all of this memory by bootloaders (when the required memory reservations are communicated to the kernel), it is necessary to move our initial page tables elsewhere. As we currently have an effectively unbound requirement for memory at the end of the kernel image for .bss, we can place the page tables there.

This patch moves the initial page tables to the end of the kernel image, after the BSS. As they do not consist of any initialised data, they will be stripped from the kernel Image as with the BSS. The BSS clearing routine is updated to stop at __bss_stop rather than _end so as to not clobber the page tables, and memory reservations made redundant by the new organisation are removed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
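
The updated clearing bound can be pictured like this (a sketch, not the exact head.S code):

```asm
// Illustrative sketch: stop clearing at __bss_stop so the page tables that
// now sit after the BSS are not wiped.
	adr_l	x0, __bss_start
	adr_l	x1, __bss_stop		// previously the loop ran up to _end
1:	str	xzr, [x0], #8
	cmp	x0, x1
	b.lo	1b
```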

By Mark Rutland

Currently __turn_mmu_on is aligned to 64 bytes to ensure that it doesn't span any page boundary, which simplifies the idmap and spares us requiring an additional page table to map half of the function. In keeping with other important requirements in architecture code, this fact is undocumented. Additionally, as the function consists of three instructions totalling 12 bytes with no literal pool data, a smaller alignment of 16 bytes would be sufficient.

This patch reduces the alignment to 16 bytes and documents the underlying reason for the alignment. This reduces the required alignment of the entire .head.text section from 64 bytes to 16 bytes, though it may still be aligned to a larger value depending on TEXT_OFFSET.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
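
For reference, the shape of the function described above (a sketch from the commit description; the exact body may differ):

```asm
// Illustrative sketch: three instructions, 12 bytes, no literal pool, so a
// 16-byte alignment is enough to keep the function from straddling a page.
	.align	4			// 2^4 = 16 bytes
__turn_mmu_on:
	msr	sctlr_el1, x0		// switch the MMU on
	isb
	br	x27			// jump to the virtual address kept in x27
```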

- 09 July 2014, 1 commit

By Marc Zyngier

The Generic Interrupt Controller (version 3) offers services that are similar to GICv2, with a number of additional features:
- Affinity routing based on the CPU MPIDR (ARE)
- System registers for the CPU interfaces (SRE)
- Support for more than 8 CPUs
- Locality-specific Peripheral Interrupts (LPIs)
- Interrupt Translation Services (ITS)

This patch adds preliminary support for GICv3 with ARE and SRE, non-secure mode only. It relies on higher exception levels to grant ARE and SRE access. Support for LPI and ITS will be added at a later time.

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jason Cooper <jason@lakedaemon.net>
Reviewed-by: Zi Shen Lim <zlim@broadcom.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Tirumalesh Chalamarla <tchalamarla@cavium.com>
Reviewed-by: Yun Wu <wuyun.wu@huawei.com>
Reviewed-by: Zhen Lei <thunder.leizhen@huawei.com>
Tested-by: Tirumalesh Chalamarla <tchalamarla@cavium.com>
Tested-by: Radha Mohan Chintakuntla <rchintakuntla@cavium.com>
Acked-by: Radha Mohan Chintakuntla <rchintakuntla@cavium.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lkml.kernel.org/r/1404140510-5382-3-git-send-email-marc.zyngier@arm.com
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
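
At EL2, the early init can detect and enable the system-register interface along these lines (a sketch with assumed immediates and label; the kernel uses named constants):

```asm
// Illustrative sketch of GICv3 detection and SRE enablement at EL2.
	mrs	x0, id_aa64pfr0_el1
	ubfx	x0, x0, #24, #4		// ID_AA64PFR0_EL1.GIC field
	cbz	x0, 2f			// no system-register GIC interface
	mrs_s	x0, ICC_SRE_EL2
	orr	x0, x0, #1		// ICC_SRE_EL2.SRE = 1
	orr	x0, x0, #8		// ICC_SRE_EL2.Enable = 1 (lower ELs may use SRE)
	msr_s	ICC_SRE_EL2, x0
2:
```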

- 04 July 2014, 1 commit

By Marc Zyngier

The CurrentEL system register reports the Current Exception Level of the CPU. It doesn't say anything about the stack handling, and yet we compare it to PSR_MODE_EL2t and PSR_MODE_EL2h. It works by chance because PSR_MODE_EL2t happens to match the right bits, but that's otherwise a very bad idea. Just check for the EL value instead.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[catalin.marinas@arm.com: fixed arch/arm64/kernel/efi-entry.S]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
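
The corrected check, in sketch form (CurrentEL_EL2 denotes the architectural encoding of EL2 in CurrentEL, i.e. 2 << 2):

```asm
// Illustrative sketch: compare against the exception level encoding itself,
// not a PSR_MODE_* value.
	mrs	x0, CurrentEL
	cmp	x0, #CurrentEL_EL2	// (2 << 2): CurrentEL.EL == EL2?
	b.ne	1f			// booted at EL1
	// ... EL2-only setup ...
1:
```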

- 10 May 2014, 1 commit

By Will Deacon

set_cpu_boot_mode_flag is used to identify which exception levels are encountered across the system by CPUs trying to enter the kernel. The basic algorithm is: if a CPU is booting at EL2, it will set a flag at an offset of #4 from __boot_cpu_mode, a cacheline-aligned variable. Otherwise, a flag is set at an offset of zero into the same cacheline. This enables us to check that all CPUs booted at the same exception level.

This cacheline is written with the stage-1 MMU off (that is, via a strongly-ordered mapping) and will bypass any clean lines in the cache, leading to potential coherence problems when the variable is later checked via the normal, cacheable mapping of the kernel image.

This patch reworks the broken flushing code so that we:

(1) Use a DMB to order the strongly-ordered write of the cacheline against the subsequent cache maintenance operation (by-VA operations only hazard against normal, cacheable accesses).

(2) Use a single dc ivac instruction to invalidate any clean lines containing a stale copy of the line after it has been updated.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
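
The reworked sequence amounts to the following (a sketch; x1 is assumed to already point at the flag word for this CPU):

```asm
// Illustrative sketch of the fixed flush sequence.
	str	w0, [x1]		// strongly-ordered store of this CPU's boot mode
	dmb	sy			// order the store against the by-VA maintenance below
	dc	ivac, x1		// invalidate any clean line holding a stale copy
```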