- 28 April 2016, 3 commits

Submitted by Matt Fleming

Abolish the poorly named EFI memory map, 'memmap'. It is shadowed by a bunch of local definitions in various files and having two ways to access the EFI memory map ('efi.memmap' vs. 'memmap') is rather confusing. Furthermore, IA64 doesn't even provide this global object, which has caused issues when trying to write generic EFI memmap code. Replace all occurrences with efi.memmap, and convert the remaining iterator code to use for_each_efi_mem_desc().

Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Luck, Tony <tony.luck@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/1461614832-17633-8-git-send-email-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Submitted by Matt Fleming

Most of the users of for_each_efi_memory_desc() are equally happy iterating over the EFI memory map in efi.memmap instead of 'memmap', since the former is usually a pointer to the latter. For those users that want to specify an EFI memory map other than efi.memmap, that can be done using for_each_efi_memory_desc_in_map(). One such example is in the libstub code, where the firmware is queried directly for the memory map, it gets iterated over, and then freed. This change goes part of the way toward deleting the global 'memmap' variable, which is not universally available on all architectures (notably IA64) and is rather poorly named.

Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Mark Salter <msalter@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/1461614832-17633-7-git-send-email-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org>

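A minimal sketch (not part of the patch) of the two iteration styles described above, assuming the post-series macro forms where for_each_efi_memory_desc() walks efi.memmap and for_each_efi_memory_desc_in_map() takes an explicit map:

#include <linux/efi.h>
#include <linux/printk.h>

/* Walk the kernel's copy of the map via efi.memmap. */
static void dump_runtime_regions(void)
{
	efi_memory_desc_t *md;

	for_each_efi_memory_desc(md) {
		if (md->attribute & EFI_MEMORY_RUNTIME)
			pr_info("runtime region at 0x%llx (%llu pages)\n",
				(unsigned long long)md->phys_addr,
				(unsigned long long)md->num_pages);
	}
}

/* Walk an explicitly supplied map, e.g. one queried by the stub. */
static u64 count_map_pages(struct efi_memory_map *map)
{
	efi_memory_desc_t *md;
	u64 pages = 0;

	for_each_efi_memory_desc_in_map(map, md)
		pages += md->num_pages;

	return pages;
}
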
Submitted by Ard Biesheuvel

Commit 2eec5ded ("efi/arm-init: Use read-only early mappings") updated the early ARM UEFI init code to create the temporary, early mapping of the UEFI System table using read-only attributes, as a hardening measure against inadvertent modification. However, this still leaves the permanent, writable mapping of the UEFI System table, which is only ever referenced during invocations of UEFI Runtime Services, at which time the UEFI virtual mapping is available, which also covers the system table. (This is guaranteed by the fact that SetVirtualAddressMap(), which is a runtime service itself, converts various entries in the table to their virtual equivalents, which implies that the table must be covered by a RuntimeServicesData region that has the EFI_MEMORY_RUNTIME attribute.) So instead of creating this permanent mapping, record the virtual address of the system table inside the UEFI virtual mapping, and dereference that when accessing the table. This protects the contents of the system table from inadvertent (or deliberate) modification when no UEFI Runtime Services calls are in progress.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/1461614832-17633-3-git-send-email-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org>

- 01 April 2016, 1 commit

Submitted by Ard Biesheuvel

Commit 4dffbfc4 ("arm64/efi: mark UEFI reserved regions as MEMBLOCK_NOMAP") updated the mapping logic of both the RuntimeServices regions as well as the kernel's copy of the UEFI memory map to set the MEMBLOCK_NOMAP flag, which causes these regions to be omitted from the kernel direct mapping, and from being covered by a struct page. For the RuntimeServices regions, this is an obvious win, since the contents of these regions have significance to the firmware executable code itself, and are mapped in the EFI page tables using attributes that are described in the UEFI memory map, and which may differ from the attributes we use for mapping system RAM. It also prevents the contents from being modified inadvertently, since the EFI page tables are only live during runtime service invocations. None of these concerns apply to the allocation that covers the UEFI memory map, since it is entirely owned by the kernel. Setting the MEMBLOCK_NOMAP flag on the region did allow us to use ioremap_cache() to map it both on arm64 and on ARM, since the latter does not allow ioremap_cache() to be used on regions that are covered by a struct page. The ioremap_cache() restriction on ARM will be lifted in the v4.7 timeframe, but in the meantime, it has been reported that commit 4dffbfc4 causes a regression on 64k granule kernels. This is due to the fact that, given the 64 KB page size, the region that we end up removing from the kernel direct mapping is rounded up to 64 KB, and this 64 KB page frame may be shared with the initrd when booting via GRUB (which does not align its EFI_LOADER_DATA allocations to 64 KB like the stub does). This will crash the kernel as soon as it tries to access the initrd. Since the issue is specific to arm64, revert back to memblock_reserve()'ing the UEFI memory map when running on arm64. This is a temporary fix for v4.5 and v4.6, and will be superseded in the v4.7 timeframe when we will be able to move back to memblock_reserve() unconditionally.

Fixes: 4dffbfc4 ("arm64/efi: mark UEFI reserved regions as MEMBLOCK_NOMAP")
Reported-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Jeremy Linton <jeremy.linton@arm.com>
Cc: Mark Langsdorf <mlangsdo@redhat.com>
Cc: <stable@vger.kernel.org> # v4.5
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>

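A standalone arithmetic sketch of the 64 KB granule problem described above; the addresses are hypothetical, not taken from a real memory map:

#define SZ_64K 0x10000UL

/*
 * Hypothetical layout: an 8 KB UEFI memory map allocation that shares
 * its 64 KB page frame with a GRUB-loaded initrd.
 */
static int initrd_loses_its_mapping(void)
{
	unsigned long memmap_start = 0x9f5e3000UL;
	unsigned long memmap_end   = 0x9f5e5000UL;
	unsigned long initrd_start = 0x9f5e8000UL;

	/* MEMBLOCK_NOMAP removal works on whole 64 KB page frames ... */
	unsigned long nomap_start = memmap_start & ~(SZ_64K - 1);              /* 0x9f5e0000 */
	unsigned long nomap_end   = (memmap_end + SZ_64K - 1) & ~(SZ_64K - 1); /* 0x9f5f0000 */

	/*
	 * ... so the initrd now sits in an unmapped frame and the first
	 * access to it faults.  memblock_reserve() keeps the frame in the
	 * linear mapping and avoids this.
	 */
	return initrd_start >= nomap_start && initrd_start < nomap_end;        /* 1 (true) */
}
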
- 22 February 2016, 1 commit

Submitted by Ard Biesheuvel

The early mappings of the EFI system table contents and the UEFI memory map are read-only from the OS point of view. So map them read-only to protect them from inadvertent modification.

Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/1455712566-16727-8-git-send-email-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org>

- 10 December 2015, 3 commits

Submitted by Ard Biesheuvel

This refactors the EFI init and runtime code that will be shared between arm64 and ARM so that it can be built for both archs.

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

Submitted by Ard Biesheuvel

This splits off the early EFI init and runtime code that:

- discovers the EFI params and the memory map from the FDT, and installs the memblocks and config tables;
- prepares and installs the EFI page tables so that UEFI Runtime Services can be invoked at the virtual address installed by the stub.

This will allow it to be reused for 32-bit ARM.

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

Submitted by Ard Biesheuvel

Change the EFI memory reservation logic to use memblock_mark_nomap() rather than memblock_reserve() to mark UEFI reserved regions as occupied. In addition to reserving them against allocations done by memblock, this will also prevent them from being covered by the linear mapping.

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

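A minimal sketch of the reservation step described above; the single-argument iterator and the is_reserve_region() policy below are illustrative simplifications, not the exact arm64 code:

#include <linux/efi.h>
#include <linux/memblock.h>

/* Illustrative policy: keep firmware-owned regions away from the OS. */
static bool __init is_reserve_region(efi_memory_desc_t *md)
{
	if (md->attribute & EFI_MEMORY_RUNTIME)
		return true;

	return md->type != EFI_CONVENTIONAL_MEMORY &&
	       md->type != EFI_LOADER_CODE &&
	       md->type != EFI_LOADER_DATA &&
	       md->type != EFI_BOOT_SERVICES_CODE &&
	       md->type != EFI_BOOT_SERVICES_DATA;
}

static void __init reserve_uefi_regions(void)
{
	efi_memory_desc_t *md;

	for_each_efi_memory_desc(md) {
		if (!is_reserve_region(md))
			continue;

		/* Previously: memblock_reserve(base, size).  NOMAP also
		 * keeps the region out of the linear mapping. */
		memblock_mark_nomap(md->phys_addr,
				    md->num_pages << EFI_PAGE_SHIFT);
	}
}
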
- 27 November 2015, 2 commits

Submitted by Ard Biesheuvel

Even though initcall return values are typically ignored, the convention is to return 0 on success or a negative errno value on error. So fix the arm_enable_runtime_services() implementation to return 0 on conditions that are not in fact errors, and return a meaningful error code otherwise.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

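A sketch of the convention being enforced; efi_runtime_disabled() and efi_virtmap_init() stand in for the real checks performed by the arm64 code:

#include <linux/efi.h>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/printk.h>

static int __init arm_enable_runtime_services(void)
{
	if (!efi_enabled(EFI_BOOT))
		return 0;		/* not booted via UEFI: not an error */

	if (efi_runtime_disabled()) {
		pr_info("EFI runtime services will be disabled.\n");
		return 0;		/* explicit user choice: not an error */
	}

	if (!efi_virtmap_init()) {
		pr_err("UEFI virtual mapping missing or invalid -- runtime services unavailable\n");
		return -ENOMEM;		/* a genuine failure: negative errno */
	}

	/* ... install the runtime page tables, set EFI_RUNTIME_SERVICES ... */
	return 0;
}
early_initcall(arm_enable_runtime_services);
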
Submitted by Ard Biesheuvel

Add NULL return value checks to two invocations of early_memremap() in the UEFI init code. For the UEFI configuration tables, we just warn, since we have a better chance of being able to report the issue in a way that can actually be noticed by a human operator if we don't abort right away. For the UEFI memory map, however, all we can do is panic(), since we cannot proceed without a description of memory.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

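A sketch of the two failure policies described above; the mapping sizes and variable names are illustrative:

#include <linux/efi.h>
#include <linux/kernel.h>
#include <linux/printk.h>
#include <asm/early_ioremap.h>

static void __init map_uefi_tables(phys_addr_t tbl_phys, size_t tbl_size,
				   phys_addr_t map_phys, size_t map_size)
{
	void *tables, *map;

	tables = early_memremap(tbl_phys, tbl_size);
	if (!tables) {
		/* Warn and carry on: a booted system is more likely to get
		 * this message in front of a human operator. */
		pr_warn("Unable to map EFI config table array.\n");
	} else {
		/* ... parse the configuration tables ... */
		early_memunmap(tables, tbl_size);
	}

	map = early_memremap(map_phys, map_size);
	if (!map)
		/* Without a description of memory we cannot proceed. */
		panic("Unable to map EFI memory map.\n");

	early_memunmap(map, map_size);
}
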
- 25 November 2015, 1 commit

Submitted by Mark Rutland

The kernel may use a page granularity of 4K, 16K, or 64K depending on configuration. When mapping EFI runtime regions, we use memrange_efi_to_native to round the physical base address of a region down to a kernel page boundary, and round the size up to a kernel page boundary, adding the residue left over from rounding down the physical base address. We do not round down the virtual base address. In __create_mapping we account for the offset of the virtual base from a granule boundary, adding the residue to the size before rounding the base down to said granule boundary. Thus we account for the residue twice, and when the residue is non-zero this will cause __create_mapping to map an additional page at the end of the region. Depending on the memory map, this page may be in a region we are not intended/permitted to map, or may clash with a different region that we wish to map. In typical cases, mapping the next item in the memory map will overwrite the erroneously created entry, as we sort the memory map in the stub. As __create_mapping can cope with base addresses which are not page aligned, we can instead rely on it to map the region appropriately, and simplify efi_virtmap_init by removing the unnecessary code.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

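A plain-C worked example of the double-counted residue; the region below is hypothetical and 4 KB kernel pages are assumed:

/* An EFI region whose base is not aligned to the kernel page size. */
static unsigned long stray_extra_bytes(void)
{
	const unsigned long page  = 0x1000;		/* 4 KB kernel page  */
	const unsigned long paddr = 0x9f1b0800UL;	/* region base       */
	const unsigned long size  = 0x800;		/* region size: 2 KB */

	/* memrange_efi_to_native-style rounding already accounts for the
	 * offset within the first page: base down, size up plus residue. */
	unsigned long native_size = ((paddr & (page - 1)) + size + page - 1)
				    & ~(page - 1);		/* 0x1000 */

	/* __create_mapping then adds the virtual offset residue again
	 * before rounding, mapping one page too many. */
	unsigned long mapped = ((paddr & (page - 1)) + native_size + page - 1)
			       & ~(page - 1);			/* 0x2000 */

	return mapped - native_size;	/* 0x1000: one full extra page */
}
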
- 18 November 2015, 1 commit

Submitted by Ard Biesheuvel

As pointed out by Russell King in response to the proposed ARM version of this code, the sequence to switch between the UEFI runtime mapping and current's actual userland mapping (and vice versa) is potentially unsafe, since it leaves a time window between the switch to the new page tables and the TLB flush where speculative accesses may hit on stale global TLB entries. So instead, use non-global mappings, and perform the switch via the ordinary ASID-aware context switch routines.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 28 October 2015, 1 commit

Submitted by Ard Biesheuvel

We have been getting away with using a void* for the physical address of the UEFI memory map, since, even on 32-bit platforms with 64-bit physical addresses, no truncation takes place if the memory map has been allocated by the firmware (which only uses 1:1 virtually addressable memory), which is usually the case. However, commit 0f96a99d ("efi: Add "efi_fake_mem" boot option") adds code that clones and modifies the UEFI memory map, and the clone may live above 4 GB on 32-bit platforms. This means our use of void* for struct efi_memory_map::phys_map has graduated from 'incorrect but working' to 'incorrect and broken', and we need to fix it. So redefine struct efi_memory_map::phys_map as phys_addr_t, and get rid of a bunch of casts that are now unneeded.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: izumi.taku@jp.fujitsu.com
Cc: kamezawa.hiroyu@jp.fujitsu.com
Cc: linux-efi@vger.kernel.org
Cc: matt.fleming@intel.com
Link: http://lkml.kernel.org/r/1445593697-1342-1-git-send-email-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>

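A sketch of the type change described above, with the struct trimmed to the one relevant field and renamed to make clear it is illustrative:

#include <linux/types.h>

struct efi_memory_map_sketch {
	phys_addr_t phys_map;	/* was: void *phys_map */
	/* ... map, map_end, desc_size, desc_version, nr_map ... */
};

static void clone_above_4g(struct efi_memory_map_sketch *m)
{
	/* A cloned map placed above 4 GB fits in a 64-bit phys_addr_t
	 * (LPAE/PAE), but would be truncated by a 32-bit void *. */
	m->phys_map = 0x100000000ULL;
}
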
- 12 October 2015, 2 commits

Submitted by Leif Lindholm

As we now have a common debug infrastructure between core and arm64 efi, drop the bit of the interface passing verbose output flags around.

Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Mark Salter <msalter@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>

Submitted by Leif Lindholm

Now that we have an efi=debug command line option in the core code, use this instead of the arm64-specific uefi_debug option.

Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Mark Salter <msalter@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>

- 07 October 2015, 2 commits

Submitted by Will Deacon

Our current switch_mm implementation suffers from a number of problems:

(1) The ASID allocator relies on IPIs to synchronise the CPUs on a rollover event.
(2) Because of (1), we cannot allocate ASIDs with interrupts disabled and therefore make use of a TIF_SWITCH_MM flag to postpone the actual switch to finish_arch_post_lock_switch.
(3) We run context switch with a reserved (invalid) TTBR0 value, even though the ASID and pgd are updated atomically.
(4) We take a global spinlock (cpu_asid_lock) during context-switch.
(5) We use h/w broadcast TLB operations when they are not required (e.g. in flush_context).

This patch addresses these problems by rewriting the ASID algorithm to match the bitmap-based arch/arm/ implementation more closely. This in turn allows us to remove much of the complication surrounding switch_mm, including the ugly thread flag.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Submitted by Will Deacon

There are a number of places where a single CPU is running with a private page-table and we need to perform maintenance on the TLB and I-cache in order to ensure correctness, but do not require the operation to be broadcast to other CPUs. This patch adds local variants of tlb_flush_all and __flush_icache_all to support these use-cases and updates the callers respectively. __local_flush_icache_all also implies an isb, since it is intended to be used synchronously.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Daney <david.daney@cavium.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 01 October 2015, 1 commit

Submitted by Ard Biesheuvel

The new Properties Table feature introduced in UEFI v2.5 may split memory regions that cover PE/COFF memory images into separate code and data regions. Since these regions only differ in the type (runtime code vs runtime data) and the permission bits, but not in the memory type attributes (UC/WC/WT/WB), the spec does not require them to be aligned to 64 KB. Since the relative offset of PE/COFF .text and .data segments cannot be changed on the fly, this means that we can no longer pad out those regions to be mappable using 64 KB pages. Unfortunately, there is no annotation in the UEFI memory map that identifies data regions that were split off from a code region, so we must apply this logic to all adjacent runtime regions whose attributes only differ in the permission bits. So instead of rounding each memory region to 64 KB alignment at both ends, only round down regions that are not directly preceded by another runtime region with the same type attributes. Since the UEFI spec does not mandate that the memory map be sorted, this means we also need to sort it first. Note that this change will result in all EFI_MEMORY_RUNTIME regions whose start addresses are not aligned to the OS page size being mapped with executable permissions (i.e., on kernels compiled with 64 KB pages). However, since these mappings are only active during the time that UEFI Runtime Services are being invoked, the window for abuse is rather small.

Tested-by: Mark Salter <msalter@redhat.com>
Tested-by: Mark Rutland <mark.rutland@arm.com> [UEFI 2.4 only]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
Reviewed-by: Mark Salter <msalter@redhat.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Cc: <stable@vger.kernel.org> # v4.0+
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/1443218539-7610-3-git-send-email-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org>

- 28 July 2015, 1 commit

Submitted by Ard Biesheuvel

At boot, the UTF-16 UEFI vendor string is copied from the system table into a char array with a size of 100 bytes. However, this size of 100 bytes is also used for memremapping the source, which may not be sufficient if the vendor string exceeds 50 UTF-16 characters, and the placement of the vendor string inside a 4 KB page happens to leave the end unmapped. So use the correct '100 * sizeof(efi_char16_t)' for the size of the mapping.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Fixes: f84d0275 ("arm64: add EFI runtime services")
Cc: <stable@vger.kernel.org> # 3.16+
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

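A sketch of the corrected mapping size; the fw_vendor physical address is passed in as a parameter rather than read from the system table, to keep the example self-contained:

#include <linux/efi.h>
#include <linux/printk.h>
#include <asm/early_ioremap.h>

static void __init copy_fw_vendor(phys_addr_t fw_vendor_phys)
{
	char vendor[100] = "unknown";
	efi_char16_t *c16;
	int i;

	/* Map the whole UTF-16 string: 100 characters, not 100 bytes. */
	c16 = early_memremap(fw_vendor_phys,
			     sizeof(vendor) * sizeof(efi_char16_t));
	if (c16) {
		/* Naive UCS-2 to ASCII narrowing, as the init code does. */
		for (i = 0; i < (int)sizeof(vendor) - 1 && c16[i]; i++)
			vendor[i] = c16[i];
		vendor[i] = 0;
		early_memunmap(c16, sizeof(vendor) * sizeof(efi_char16_t));
	}

	pr_info("EFI vendor: %s\n", vendor);
}
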
- 28 May 2015, 1 commit

Submitted by Dan Williams

ACPI 6.0 formalizes e820-type-7 and efi-type-14 as persistent memory. Mark it "reserved" and allow it to be claimed by a persistent memory device driver. This definition is in addition to the Linux kernel's existing type-12 definition that was recently added in support of shipping platforms with NVDIMM support that predate ACPI 6.0 (which now classifies type-12 as OEM reserved). Note, /proc/iomem can be consulted for differentiating legacy "Persistent Memory (legacy)" E820_PRAM vs standard "Persistent Memory" E820_PMEM.

Cc: Boaz Harrosh <boaz@plexistor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jens Axboe <axboe@fb.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matthew Wilcox <willy@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Jeff Moyer <jmoyer@redhat.com>
Acked-by: Andy Lutomirski <luto@amacapital.net>
Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Acked-by: Christoph Hellwig <hch@lst.de>
Tested-by: Toshi Kani <toshi.kani@hp.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>

- 21 March 2015, 1 commit

Submitted by Will Deacon

init_mm isn't a normal mm: it has swapper_pg_dir as its pgd (which contains kernel mappings) and is used as the active_mm for the idle thread. When restoring the pgd after an EFI call, we write current->active_mm into TTBR0. If the current task is actually the idle thread (e.g. when initialising the EFI RTC before entering userspace), then the TLB can erroneously populate itself with junk global entries as a result of speculative table walks. When we do eventually return to userspace, the task can end up hitting these junk mappings, leading to lockups, corruption or crashes. This patch fixes the problem in the same way as the CPU suspend code, by ensuring that we never switch to the init_mm in efi_set_pgd and instead point TTBR0 at the zero page. A check is also added to cpu_switch_mm to BUG if we get passed swapper_pg_dir.

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Fixes: f3cdfd23 ("arm64/efi: move SetVirtualAddressMap() to UEFI stub")
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 14 March 2015, 1 commit

Submitted by Ard Biesheuvel

If UEFI Runtime Services are available, they are preferred over direct PSCI calls or other methods to reset the system. For the reset case, we need to hook into machine_restart(), as the arm_pm_restart function pointer may be overwritten by modules.

Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Matt Fleming <matt.fleming@intel.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

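A sketch of the preference described above; efi.reset_system and EFI_RESET_COLD are the existing Linux EFI runtime interfaces, while the hook placement is only summarised in the comment:

#include <linux/efi.h>

static void efi_try_reset(void)
{
	/*
	 * Called from machine_restart() rather than via arm_pm_restart,
	 * since that function pointer may be overwritten by modules.
	 */
	if (efi_enabled(EFI_RUNTIME_SERVICES))
		efi.reset_system(EFI_RESET_COLD, EFI_SUCCESS, 0, NULL);

	/* If we are still here, fall back to PSCI or the board hook. */
}
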
- 22 January 2015, 1 commit

Submitted by Ard Biesheuvel

Now that the create_mapping() code in mm/mmu.c is able to support setting up kernel page tables at initcall time, we can move the whole virtmap creation to arm64_enable_runtime_services() instead of having a distinct stage during early boot. This also allows us to drop the arm64-specific EFI_VIRTMAP flag.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel-QSEj5FYQhm4dnm+yROfE0A@public.gmane.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 17 January 2015, 1 commit

Submitted by Ard Biesheuvel

When remapping the UEFI memory map using ioremap_cache(), we have to deal with potential failure. Note that, even if the common case is for ioremap_cache() to return the existing linear mapping of the memory map, we cannot rely on that always being the case, e.g., in the presence of a mem= kernel parameter. At the same time, remove a stale comment and move the memmap code together.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

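A sketch of the failure handling described above; field names follow struct efi_memory_map (with the phys_addr_t phys_map from the change above), and the map size is passed in explicitly:

#include <linux/efi.h>
#include <linux/errno.h>
#include <linux/io.h>
#include <linux/printk.h>

static int __init remap_memmap(struct efi_memory_map *mm, size_t mapsize)
{
	/*
	 * ioremap_cache() usually hands back the existing linear mapping
	 * of the memory map, but that cannot be relied on, e.g. when a
	 * mem= parameter has shrunk the linear region.
	 */
	mm->map = (__force void *)ioremap_cache(mm->phys_map, mapsize);
	if (!mm->map) {
		pr_err("Failed to remap EFI memory map\n");
		return -ENOMEM;
	}
	mm->map_end = mm->map + mapsize;

	return 0;
}
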
- 13 January 2015, 3 commits

Submitted by Ard Biesheuvel

Now that we have moved the call to SetVirtualAddressMap() to the stub, UEFI has no use for the ID map, so we can drop the code that installs ID mappings for UEFI memory regions.

Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

Submitted by Ard Biesheuvel

Now that we are calling SetVirtualAddressMap() from the stub, there is no need to reserve boot-only memory regions, which implies that there is also no reason to free them again later.

Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

Submitted by Ard Biesheuvel

In order to support kexec, the kernel needs to be able to deal with the state of the UEFI firmware after SetVirtualAddressMap() has been called. To avoid having separate code paths for non-kexec and kexec, let's move the call to SetVirtualAddressMap() to the stub: this will guarantee us that it will only be called once (since the stub is not executed during kexec), and ensures that the UEFI state is identical between kexec and normal boot. This implies that the layout of the virtual mapping needs to be created by the stub as well. All regions are rounded up to a naturally aligned multiple of 64 KB (for compatibility with 64k pages kernels) and recorded in the UEFI memory map. The kernel proper reads those values and installs the mappings in a dedicated set of page tables that are swapped in during UEFI Runtime Services calls.

Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Matt Fleming <matt.fleming@intel.com>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

- 08 January 2015, 1 commit

Submitted by Ard Biesheuvel

The early ioremap support introduced by patch bf4b558e ("arm64: add early_ioremap support") failed to add a call to early_ioremap_reset() at an appropriate time. Without this call, invocations of early_ioremap etc. that are done too late will go unnoticed and may cause corruption. This is exactly what happened when the first user of this feature was added in patch f84d0275 ("arm64: add EFI runtime services"). The early mapping of the EFI memory map is unmapped during an early initcall, at which time the early ioremap support is long gone. Fix by adding the missing call to early_ioremap_reset() to setup_arch(), and move the offending early_memunmap() to right after the point where the early mapping of the EFI memory map is last used.

Fixes: f84d0275 ("arm64: add EFI runtime services")
Cc: <stable@vger.kernel.org>
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

- 05 November 2014, 4 commits

Submitted by Ard Biesheuvel

This sets the DMI string, containing system type, serial number, firmware version etc., as the dump stack arch description, so that oopses and other kernel stack dumps automatically have this information included, if available.

Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

Submitted by Yi Li

SMBIOS is important for server hardware vendors. It implements a spec for providing descriptive information about the platform. Things like serial numbers, physical layout of the ports, build configuration data, and the like.

Signed-off-by: Yi Li <yi.li@linaro.org>
Tested-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

Submitted by Ard Biesheuvel

The EFI_CONFIG_TABLES bit already gets set by efi_config_init(), so there is no reason to set it again after this function returns successfully.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

Submitted by Ard Biesheuvel

Instead of reserving the memory regions based on which types we know need to be reserved, consider only regions of the following types as free for general use by the OS:

EFI_LOADER_CODE
EFI_LOADER_DATA
EFI_BOOT_SERVICES_CODE
EFI_BOOT_SERVICES_DATA
EFI_CONVENTIONAL_MEMORY

Note that this also fixes a problem with the original code, which would misidentify an EFI_RUNTIME_SERVICES_DATA region as not reserved if it does not have the EFI_MEMORY_RUNTIME attribute set. However, it is perfectly legal for the firmware not to request a virtual mapping for EFI_RUNTIME_SERVICES_DATA regions that contain configuration tables, in which case the EFI_MEMORY_RUNTIME attribute would not be set.

Acked-by: Roy Franz <roy.franz@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

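A sketch of the policy above, written as an illustrative predicate rather than the kernel's exact helper:

#include <linux/efi.h>

/* True if the OS may treat this region as ordinary free memory. */
static bool __init region_is_free_ram(efi_memory_desc_t *md)
{
	switch (md->type) {
	case EFI_LOADER_CODE:
	case EFI_LOADER_DATA:
	case EFI_BOOT_SERVICES_CODE:
	case EFI_BOOT_SERVICES_DATA:
	case EFI_CONVENTIONAL_MEMORY:
		return true;
	default:
		/*
		 * Everything else stays reserved, including
		 * EFI_RUNTIME_SERVICES_DATA regions that never had the
		 * EFI_MEMORY_RUNTIME attribute set.
		 */
		return false;
	}
}
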
- 04 October 2014, 3 commits

Submitted by Laszlo Ersek

An example log excerpt demonstrating the change.

Before the patch:

> Processing EFI memory map:
> 0x000040000000-0x000040000fff [Loader Data]
> 0x000040001000-0x00004007ffff [Conventional Memory]
> 0x000040080000-0x00004072afff [Loader Data]
> 0x00004072b000-0x00005fdfffff [Conventional Memory]
> 0x00005fe00000-0x00005fe0ffff [Loader Data]
> 0x00005fe10000-0x0000964e8fff [Conventional Memory]
> 0x0000964e9000-0x0000964e9fff [Loader Data]
> 0x0000964ea000-0x000096c52fff [Loader Code]
> 0x000096c53000-0x00009709dfff [Boot Code]*
> 0x00009709e000-0x0000970b3fff [Runtime Code]*
> 0x0000970b4000-0x0000970f4fff [Runtime Data]*
> 0x0000970f5000-0x000097117fff [Runtime Code]*
> 0x000097118000-0x000097199fff [Runtime Data]*
> 0x00009719a000-0x0000971dffff [Runtime Code]*
> 0x0000971e0000-0x0000997f8fff [Conventional Memory]
> 0x0000997f9000-0x0000998f1fff [Boot Data]*
> 0x0000998f2000-0x0000999eafff [Conventional Memory]
> 0x0000999eb000-0x00009af09fff [Boot Data]*
> 0x00009af0a000-0x00009af21fff [Conventional Memory]
> 0x00009af22000-0x00009af46fff [Boot Data]*
> 0x00009af47000-0x00009af5bfff [Conventional Memory]
> 0x00009af5c000-0x00009afe1fff [Boot Data]*
> 0x00009afe2000-0x00009afe2fff [Conventional Memory]
> 0x00009afe3000-0x00009c01ffff [Boot Data]*
> 0x00009c020000-0x00009efbffff [Conventional Memory]
> 0x00009efc0000-0x00009f14efff [Boot Code]*
> 0x00009f14f000-0x00009f162fff [Runtime Code]*
> 0x00009f163000-0x00009f194fff [Runtime Data]*
> 0x00009f195000-0x00009f197fff [Boot Data]*
> 0x00009f198000-0x00009f198fff [Runtime Data]*
> 0x00009f199000-0x00009f1acfff [Conventional Memory]
> 0x00009f1ad000-0x00009f1affff [Boot Data]*
> 0x00009f1b0000-0x00009f1b0fff [Runtime Data]*
> 0x00009f1b1000-0x00009fffffff [Boot Data]*
> 0x000004000000-0x000007ffffff [Memory Mapped I/O]
> 0x000009010000-0x000009010fff [Memory Mapped I/O]

After the patch:

> Processing EFI memory map:
> 0x000040000000-0x000040000fff [Loader Data | | | | | |WB|WT|WC|UC]
> 0x000040001000-0x00004007ffff [Conventional Memory| | | | | |WB|WT|WC|UC]
> 0x000040080000-0x00004072afff [Loader Data | | | | | |WB|WT|WC|UC]
> 0x00004072b000-0x00005fdfffff [Conventional Memory| | | | | |WB|WT|WC|UC]
> 0x00005fe00000-0x00005fe0ffff [Loader Data | | | | | |WB|WT|WC|UC]
> 0x00005fe10000-0x0000964e8fff [Conventional Memory| | | | | |WB|WT|WC|UC]
> 0x0000964e9000-0x0000964e9fff [Loader Data | | | | | |WB|WT|WC|UC]
> 0x0000964ea000-0x000096c52fff [Loader Code | | | | | |WB|WT|WC|UC]
> 0x000096c53000-0x00009709dfff [Boot Code | | | | | |WB|WT|WC|UC]*
> 0x00009709e000-0x0000970b3fff [Runtime Code |RUN| | | | |WB|WT|WC|UC]*
> 0x0000970b4000-0x0000970f4fff [Runtime Data |RUN| | | | |WB|WT|WC|UC]*
> 0x0000970f5000-0x000097117fff [Runtime Code |RUN| | | | |WB|WT|WC|UC]*
> 0x000097118000-0x000097199fff [Runtime Data |RUN| | | | |WB|WT|WC|UC]*
> 0x00009719a000-0x0000971dffff [Runtime Code |RUN| | | | |WB|WT|WC|UC]*
> 0x0000971e0000-0x0000997f8fff [Conventional Memory| | | | | |WB|WT|WC|UC]
> 0x0000997f9000-0x0000998f1fff [Boot Data | | | | | |WB|WT|WC|UC]*
> 0x0000998f2000-0x0000999eafff [Conventional Memory| | | | | |WB|WT|WC|UC]
> 0x0000999eb000-0x00009af09fff [Boot Data | | | | | |WB|WT|WC|UC]*
> 0x00009af0a000-0x00009af21fff [Conventional Memory| | | | | |WB|WT|WC|UC]
> 0x00009af22000-0x00009af46fff [Boot Data | | | | | |WB|WT|WC|UC]*
> 0x00009af47000-0x00009af5bfff [Conventional Memory| | | | | |WB|WT|WC|UC]
> 0x00009af5c000-0x00009afe1fff [Boot Data | | | | | |WB|WT|WC|UC]*
> 0x00009afe2000-0x00009afe2fff [Conventional Memory| | | | | |WB|WT|WC|UC]
> 0x00009afe3000-0x00009c01ffff [Boot Data | | | | | |WB|WT|WC|UC]*
> 0x00009c020000-0x00009efbffff [Conventional Memory| | | | | |WB|WT|WC|UC]
> 0x00009efc0000-0x00009f14efff [Boot Code | | | | | |WB|WT|WC|UC]*
> 0x00009f14f000-0x00009f162fff [Runtime Code |RUN| | | | |WB|WT|WC|UC]*
> 0x00009f163000-0x00009f194fff [Runtime Data |RUN| | | | |WB|WT|WC|UC]*
> 0x00009f195000-0x00009f197fff [Boot Data | | | | | |WB|WT|WC|UC]*
> 0x00009f198000-0x00009f198fff [Runtime Data |RUN| | | | |WB|WT|WC|UC]*
> 0x00009f199000-0x00009f1acfff [Conventional Memory| | | | | |WB|WT|WC|UC]
> 0x00009f1ad000-0x00009f1affff [Boot Data | | | | | |WB|WT|WC|UC]*
> 0x00009f1b0000-0x00009f1b0fff [Runtime Data |RUN| | | | |WB|WT|WC|UC]*
> 0x00009f1b1000-0x00009fffffff [Boot Data | | | | | |WB|WT|WC|UC]*
> 0x000004000000-0x000007ffffff [Memory Mapped I/O |RUN| | | | | | | |UC]
> 0x000009010000-0x000009010fff [Memory Mapped I/O |RUN| | | | | | | |UC]

The attribute bitmap is now displayed, in decoded form.

Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>

Submitted by Dave Young

In case EFI runtime is disabled via the noefi kernel cmdline, arm64_enter_virtual_mode() should error out. At the same time, move early_memunmap(memmap.map, mapsize) to the beginning of the function, or it will leak early mem.

Signed-off-by: Dave Young <dyoung@redhat.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>

Submitted by Dave Young

There's one early memmap leak in the uefi_init error path; fix it and slightly tune the error handling code.

Signed-off-by: Dave Young <dyoung@redhat.com>
Acked-by: Mark Salter <msalter@redhat.com>
Reported-by: Will Deacon <will.deacon@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>

- 23 September 2014, 1 commit

Submitted by Catalin Marinas

This reverts commit 668ebd10 ... because of lots of warnings during boot if Linux isn't started as an EFI application:

WARNING: CPU: 4 PID: 1 at /work/Linux/linux-2.6-aarch64/drivers/firmware/dmi_scan.c:591 dmi_matches+0x10c/0x110()
dmi check: not initialized yet.
Modules linked in:
CPU: 4 PID: 1 Comm: swapper/0 Not tainted 3.17.0-rc4+ #606
Call trace:
[<ffffffc000087fb0>] dump_backtrace+0x0/0x124
[<ffffffc0000880e4>] show_stack+0x10/0x1c
[<ffffffc0004d58f8>] dump_stack+0x74/0xb8
[<ffffffc0000ab640>] warn_slowpath_common+0x8c/0xb4
[<ffffffc0000ab6b4>] warn_slowpath_fmt+0x4c/0x58
[<ffffffc0003f2d7c>] dmi_matches+0x108/0x110
[<ffffffc0003f2da8>] dmi_check_system+0x24/0x68
[<ffffffc0006974c4>] atkbd_init+0x10/0x34
[<ffffffc0000814ac>] do_one_initcall+0x88/0x1a0
[<ffffffc00067aab4>] kernel_init_freeable+0x148/0x1e8
[<ffffffc0004d2c64>] kernel_init+0x10/0xd4

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 22 September 2014, 1 commit

Submitted by Yi Li

SMBIOS is important for server hardware vendors. It implements a spec for providing descriptive information about the platform. Things like serial numbers, physical layout of the ports, build configuration data, and the like. This has been tested with the dmidecode and lshw tools. This patch adds the call to dmi_scan_machine() to arm64_enter_virtual_mode(), as that is the point where the EFI Configuration Tables are registered as being available. It needs to be in an early_initcall anyway, as dmi_id_init(), which is an arch_initcall itself, depends on dmi_scan_machine() having been called already.

Signed-off-by: Yi Li <yi.li@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 22 August 2014, 1 commit

Submitted by Semen Protsenko

The "efi" global data structure contains a "runtime_version" field which must be assigned in order to use it later in Runtime Services virtual calls (virt_efi_* functions). Before this patch, "runtime_version" was unassigned (0), so each Runtime Service virtual call that checks the revision would fail.

Signed-off-by: Semen Protsenko <semen.protsenko@linaro.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>

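A sketch of the assignment described above; it assumes efi.systab has already been remapped by the arm64 init code:

#include <linux/efi.h>

static void __init publish_runtime_version(void)
{
	/*
	 * The virt_efi_*() wrappers gate newer runtime calls on
	 * efi.runtime_version, so copy the revision out of the mapped
	 * system table header.
	 */
	efi.runtime_version = efi.systab->hdr.revision;
}
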
- 20 August 2014, 1 commit

Submitted by Leif Lindholm

UEFI provides its own method for marking regions to reserve, via the memory map, which is also used to initialise memblock. So when using the UEFI memory map, ignore any memreserve entries present in the DT.

Reported-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

- 19 July 2014, 1 commit

Submitted by Ard Biesheuvel

If we cannot resolve the virtual address of the UEFI System Table, its physical offset must be missing from the virtual memory map, and there is really no point in proceeding with installing the virtual memory map and the runtime services dispatch table. So back out gracefully.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Salter <msalter@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>