- 11 Jan 2020, 14 commits
-
-
Committed by Ard Biesheuvel
Remove some code that is guaranteed to be unreachable, given that we have already bailed by this time if EFI_OLD_MEMMAP is set.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Matthew Garrett <mjg59@google.com> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20200103113953.9571-15-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ard Biesheuvel
The logic in __efi_enter_virtual_mode() performs a number of steps in sequence, all of which may fail in one way or another. In most cases, we simply print an error and disable EFI runtime services support, but in some cases, we BUG() or panic() and bring down the system when encountering conditions that we could easily handle in the same way. While at it, replace a pointless page-to-virt-to-phys conversion with one that goes straight from struct page to physical address.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Matthew Garrett <mjg59@google.com> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20200103113953.9571-14-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ard Biesheuvel
Clean up the efi_systab_init() routine, which maps the EFI system table and copies the relevant pieces of data out of it. The current routine is very difficult to read, so let's clean that up. Also, switch to an R/O mapping of the system table, since that is all we need. Finally, use a plain u64 variable to record the physical address of the system table instead of pointlessly stashing it in a struct efi that is never used for anything else.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Matthew Garrett <mjg59@google.com> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20200103113953.9571-13-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ard Biesheuvel
The routines efi_runtime_init32() and efi_runtime_init64() are almost indistinguishable, and the only relevant difference is the offset in the runtime struct from which to obtain the physical address of the SetVirtualAddressMap() routine. However, this address is only used once, when installing the virtual address map that the OS will use to invoke EFI runtime services, and at the time of that call, we will necessarily be running with a 1:1 mapping, so there is no need to do the map/unmap dance here to retrieve the address. In fact, in the preceding changes to these users, we stopped using the address recorded here entirely. So let's just get rid of all this code, since it no longer serves a purpose. While at it, tweak the logic so that we handle unsupported and disabled EFI runtime services in the same way, and unmap the EFI memory map in both cases.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Matthew Garrett <mjg59@google.com> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20200103113953.9571-12-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ard Biesheuvel
Calling 32-bit EFI runtime services from a 64-bit OS involves switching back to the flat mapping with a stack carved out of memory that is 32-bit addressable. There is no need to actually execute the 64-bit part of this routine from the flat mapping as well, as long as the entry and return addresses fit in 32 bits. There is also no need to preserve part of the calling context in global variables: we can simply push the old stack pointer value onto the new stack, and keep the return address from the code32 section in EBX. While at it, move the conditional check of whether to invoke the mixed mode version of SetVirtualAddressMap() into the 64-bit implementation of the wrapper routine.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Matthew Garrett <mjg59@google.com> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20200103113953.9571-11-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ard Biesheuvel
The efi_call() wrapper used to invoke EFI runtime services serves a number of purposes: realigning the stack to 16 bytes, preserving the FP and CR0 register state, and translating from the SysV to the MS calling convention. Preserving CR0.TS is no longer necessary in Linux, and preserving the FP register state is also redundant in most cases, since efi_call() is almost always used from within the scope of a pair of kernel_fpu_begin()/kernel_fpu_end() calls, with the exception of the early call to SetVirtualAddressMap() and the SGI UV support code. So let's add a pair of kernel_fpu_begin()/_end() calls there as well, remove the unnecessary code from the assembly implementation of efi_call(), and only keep the pieces that deal with stack alignment and the ABI translation.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Matthew Garrett <mjg59@google.com> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20200103113953.9571-10-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
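A minimal sketch of the pattern described above, bracketing the early SetVirtualAddressMap() call with kernel_fpu_begin()/kernel_fpu_end(); the wrapper name and exact call site are illustrative rather than the precise code added by the patch.

#include <linux/efi.h>
#include <asm/fpu/api.h>    /* kernel_fpu_begin(), kernel_fpu_end() */

/* Sketch: save/restore FP state around the raw firmware call so the
 * efi_call() assembly no longer needs to preserve it itself. */
static efi_status_t efi_set_virtual_address_map_wrapped(unsigned long map_size,
                                                        unsigned long desc_size,
                                                        u32 desc_version,
                                                        efi_memory_desc_t *map)
{
        efi_status_t status;

        kernel_fpu_begin();     /* saves FP/SIMD state, disables preemption */
        status = efi.set_virtual_address_map(map_size, desc_size,
                                             desc_version, map);
        kernel_fpu_end();       /* restores FP/SIMD state */

        return status;
}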
-
Committed by Ard Biesheuvel
The variadic efi_call_phys() wrapper that exists on i386 was originally created to call into any EFI firmware runtime service, but in practice, we only use it once, to call SetVirtualAddressMap() during early boot. The flexibility provided by its variadic nature also makes it type unsafe, and makes the assembler code more complicated than necessary, since it has to deal with an unknown number of arguments living on the stack. So clean this up by renaming the helper to efi_call_svam() and dropping the unneeded complexity. Let's also drop the reference to the efi_phys struct and grab the address from the EFI system table directly.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Matthew Garrett <mjg59@google.com> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20200103113953.9571-9-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ard Biesheuvel
Split the phys_efi_set_virtual_address_map() routine into 32-bit and 64-bit versions, so we can simplify them individually in subsequent patches. There is very little overlap between the logic anyway, and this has already been factored out into prolog/epilog routines which are completely different between 32-bit and 64-bit. So let's take it one step further and get rid of the overlap completely.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Matthew Garrett <mjg59@google.com> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20200103113953.9571-8-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ard Biesheuvel
In a subsequent patch, we will fold the prolog/epilog routines that are part of the support code for calling SetVirtualAddressMap() with a 1:1 mapping into their callers. However, the 64-bit version mostly consists of ugly mapping code that is only used when efi=old_map is in effect, which is extremely rare. So let's move this code out of the way so it does not clutter the common code.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Matthew Garrett <mjg59@google.com> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20200103113953.9571-7-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ard Biesheuvel
All EFI firmware call prototypes have been annotated as __efiapi, permitting us to attach attributes regarding the calling convention by overriding __efiapi to an architecture-specific value. On 32-bit x86, EFI firmware calls use the plain calling convention where all arguments are passed via the stack and cleaned up by the caller. Let's add this to the __efiapi definition so we no longer need to cast the function pointers before invoking them.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Matthew Garrett <mjg59@google.com> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20200103113953.9571-6-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
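A sketch of how such an architecture-specific override can look, assuming the rest of the i386 kernel is built with -mregparm=3 so EFI calls must fall back to the plain stack-based, caller-cleanup convention; the exact spelling and location in the tree may differ.

/* Sketch: per-architecture calling-convention attribute for EFI calls. */
#ifdef CONFIG_X86_64
#define __efiapi        __attribute__((ms_abi))        /* UEFI uses the MS ABI on x86-64 */
#elif defined(CONFIG_X86_32)
#define __efiapi        __attribute__((regparm(0)))    /* all args on the stack, caller cleans up */
#else
#define __efiapi
#endif

/* A prototype annotated this way can then be invoked without casts: */
typedef efi_status_t (__efiapi *efi_get_time_t)(efi_time_t *tm, efi_time_cap_t *tc);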
-
Committed by Ard Biesheuvel
Fix a couple of issues with the way we map and copy the vendor string:
- we map only 2 bytes, which usually works since you get at least a page, but if the vendor string happens to cross a page boundary, a crash will result;
- only call early_memunmap() if early_memremap() succeeded, or we will call it with a NULL address, which it doesn't like;
- while at it, switch to early_memremap_ro(), and to array indexing rather than pointer dereferencing to read the CHAR16 characters.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Matthew Garrett <mjg59@google.com> Cc: linux-efi@vger.kernel.org Fixes: 5b83683f ("x86: EFI runtime service support") Link: https://lkml.kernel.org/r/20200103113953.9571-5-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
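A hedged sketch of the fixed pattern: map the whole buffer read-only, copy by array index, and only unmap when the mapping actually succeeded. The function name, the fw_vendor parameter and the buffer size are illustrative.

static char vendor[100] __initdata = "unknown";

/* Sketch: copy the UTF-16 firmware vendor string into 'vendor'. */
static void __init efi_copy_vendor_string(u64 fw_vendor)
{
        const efi_char16_t *c16;
        int i;

        c16 = early_memremap_ro(fw_vendor, sizeof(vendor) * sizeof(efi_char16_t));
        if (!c16) {
                pr_err("Could not map the firmware vendor!\n");
                return;
        }

        for (i = 0; i < sizeof(vendor) - 1 && c16[i]; i++)
                vendor[i] = c16[i];
        vendor[i] = '\0';

        early_memunmap((void *)c16, sizeof(vendor) * sizeof(efi_char16_t));
}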
-
Committed by Ard Biesheuvel
Commit a8147dba ("efi/x86: Rename efi_is_native() to efi_is_mixed()") renamed and refactored efi_is_native() into efi_is_mixed(), but failed to take into account that these are not diametrical opposites. Mixed mode is a construct that permits 64-bit kernels to boot on 32-bit firmware, but there is another non-native combination which is supported, i.e., 32-bit kernels booting on 64-bit firmware, but only for boot and not for runtime services. Also, mixed mode can be disabled in Kconfig, in which case the 64-bit kernel can still be booted from 32-bit firmware, but without access to runtime services. Due to this oversight, efi_runtime_supported() now incorrectly returns true for such configurations, resulting in crashes at boot. So fix this by making efi_runtime_supported() aware of these cases. As a side effect, some efi_thunk_xxx() stubs have become obsolete, so remove them as well.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Matthew Garrett <mjg59@google.com> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20200103113953.9571-4-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
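The shape of the check being described, as a sketch (the exact predicate in the tree may differ): runtime services are available when kernel and firmware bitness match, and otherwise only when mixed mode is both applicable and compiled in.

static inline bool efi_runtime_supported(void)
{
        /* native: 64-bit kernel on 64-bit firmware, or 32 on 32 */
        if (IS_ENABLED(CONFIG_X86_64) == efi_enabled(EFI_64BIT))
                return true;

        /* 64-bit kernel on 32-bit firmware: only with mixed mode built in */
        return IS_ENABLED(CONFIG_EFI_MIXED) && !efi_enabled(EFI_OLD_MEMMAP);
}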
-
Committed by Ard Biesheuvel
Commit c3710de5 ("efi/libstub/x86: Drop __efi_early() export and efi_config struct") introduced a reference from C code in eboot.c to the startup_32 symbol defined in the .S startup code. This results in a GOT-based reference to startup_32, and since GOT entries carry absolute addresses, they need to be fixed up before they can be used. On modern toolchains (binutils 2.26 or later), this reference is relaxed into an R_386_GOTOFF relocation (or the analogous X86_64 one), which never uses the absolute address in the entry, and so we get away with not fixing up the GOT table before calling the EFI entry point. However, GCC 4.6 combined with a binutils of the era (2.24) will produce a true GOT-indirected reference, resulting in a wrong value being returned for the address of startup_32() if the boot code is not running at the address it was linked at. Fortunately, we can easily override this behavior, and force GCC to emit the GOTOFF relocations explicitly, by setting the visibility pragma to 'hidden'.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Matthew Garrett <mjg59@google.com> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20200103113953.9571-3-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
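The override can be expressed as a visibility pragma around the declaration, which makes GCC emit a PC-relative GOTOFF reference rather than loading an absolute address from the GOT; a minimal sketch with illustrative placement and declaration type.

/* Sketch: force hidden visibility so references to startup_32 are
 * emitted as GOTOFF (position-independent) instead of GOT-indirect. */
#pragma GCC visibility push(hidden)
extern const unsigned long startup_32;  /* defined in the .S startup code */
#pragma GCC visibility pop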
-
Committed by Ard Biesheuvel
The mixed mode refactor actually broke mixed mode by failing to pass the bootparam structure to startup_32(). This went unnoticed because it apparently has a high tolerance for being passed random junk, and still boots fine in some cases. So let's fix this by populating %esi as required when entering via efi32_stub_entry, and while at it, preserve the arguments themselves instead of their address in memory (via the stack pointer), since that memory could be clobbered before we get to it.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Matthew Garrett <mjg59@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20200103113953.9571-2-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 05 Jan 2020, 1 commit
-
-
Committed by David Hildenbrand
We currently try to shrink a single zone when removing memory. We use the zone of the first page of the memory we are removing. If that memmap was never initialized (e.g., the memory was never onlined), we will read garbage and can trigger kernel BUGs (due to a stale pointer):

BUG: unable to handle page fault for address: 000000000000353d
#PF: supervisor write access in kernel mode
#PF: error_code(0x0002) - not-present page
PGD 0 P4D 0
Oops: 0002 [#1] SMP PTI
CPU: 1 PID: 7 Comm: kworker/u8:0 Not tainted 5.3.0-rc5-next-20190820+ #317
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.4
Workqueue: kacpi_hotplug acpi_hotplug_work_fn
RIP: 0010:clear_zone_contiguous+0x5/0x10
Code: 48 89 c6 48 89 c3 e8 2a fe ff ff 48 85 c0 75 cf 5b 5d c3 c6 85 fd 05 00 00 01 5b 5d c3 0f 1f 840
RSP: 0018:ffffad2400043c98 EFLAGS: 00010246
RAX: 0000000000000000 RBX: 0000000200000000 RCX: 0000000000000000
RDX: 0000000000200000 RSI: 0000000000140000 RDI: 0000000000002f40
RBP: 0000000140000000 R08: 0000000000000000 R09: 0000000000000001
R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000140000
R13: 0000000000140000 R14: 0000000000002f40 R15: ffff9e3e7aff3680
FS: 0000000000000000(0000) GS:ffff9e3e7bb00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000000000353d CR3: 0000000058610000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 __remove_pages+0x4b/0x640
 arch_remove_memory+0x63/0x8d
 try_remove_memory+0xdb/0x130
 __remove_memory+0xa/0x11
 acpi_memory_device_remove+0x70/0x100
 acpi_bus_trim+0x55/0x90
 acpi_device_hotplug+0x227/0x3a0
 acpi_hotplug_work_fn+0x1a/0x30
 process_one_work+0x221/0x550
 worker_thread+0x50/0x3b0
 kthread+0x105/0x140
 ret_from_fork+0x3a/0x50
Modules linked in:
CR2: 000000000000353d

Instead, shrink the zones when offlining memory or when onlining failed. Introduce and use remove_pfn_range_from_zone() for that. We now properly shrink the zones, even if we have DIMMs whereby:
- some memory blocks fall into no zone (never onlined)
- some memory blocks fall into multiple zones (offlined + re-onlined)
- multiple memory blocks fall into different zones
Drop the zone parameter (with a potentially dubious value) from __remove_pages() and __remove_section().
Link: http://lkml.kernel.org/r/20191006085646.5768-6-david@redhat.com Fixes: f1dd2cd1 ("mm, memory_hotplug: do not associate hotadded memory to zones until online") [visible after d0dc12e8] Signed-off-by: David Hildenbrand <david@redhat.com> Reviewed-by: Oscar Salvador <osalvador@suse.de> Cc: Michal Hocko <mhocko@suse.com> Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> Cc: Pavel Tatashin <pasha.tatashin@soleen.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Logan Gunthorpe <logang@deltatee.com> Cc: <stable@vger.kernel.org> [5.0+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 25 Dec 2019, 20 commits
-
-
Committed by Ard Biesheuvel
Instead of storing the return address in a global variable when calling a 32-bit EFI service from the 64-bit stub, avoid the indirection via efi_exit32, and take the return address from the stack.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Borislav Petkov <bp@alien8.de> Cc: James Morse <james.morse@arm.com> Cc: Matt Fleming <matt@codeblueprint.co.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20191224151025.32482-26-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ard Biesheuvel
The macros efi_call_early() and efi_call_runtime() are used to call EFI boot services and runtime services, respectively. However, the naming is confusing, given that the early vs. runtime distinction may suggest that these are used for calling the same set of services either early or late (i.e., at runtime), while in reality, the sets of services they can be used with are completely disjoint, and efi_call_runtime() is only usable in 'early' code as well. So do a global sweep to replace all occurrences with efi_bs_call() or efi_rt_call(), respectively, where BS and RT match the idiom used by the UEFI spec to refer to boot-time or runtime services. While at it, use 'func' as the macro parameter name for the function pointers, which is less likely to collide and cause weird build errors.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Borislav Petkov <bp@alien8.de> Cc: James Morse <james.morse@arm.com> Cc: Matt Fleming <matt@codeblueprint.co.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20191224151025.32482-24-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
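In the native (non-mixed) case, the renamed macros can be thought of as straight indirections through the system table; a sketch under that assumption (mixed mode adds a thunking layer that is not shown here).

/* Sketch of the native definitions: call a boot service or a runtime
 * service by member name through the EFI system table. */
#define efi_bs_call(func, ...)  efi_system_table()->boottime->func(__VA_ARGS__)
#define efi_rt_call(func, ...)  efi_system_table()->runtime->func(__VA_ARGS__)

/* Example use in stub code:
 *      status = efi_bs_call(allocate_pool, EFI_LOADER_DATA, size, &buf);
 */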
-
Committed by Ard Biesheuvel
None of the definitions of efi_table_attr() still refer to their 'table' argument, so let's get rid of it entirely.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Borislav Petkov <bp@alien8.de> Cc: James Morse <james.morse@arm.com> Cc: Matt Fleming <matt@codeblueprint.co.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20191224151025.32482-23-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ard Biesheuvel
After refactoring the mixed mode support code, efi_call_proto() no longer uses its protocol argument in any of its implementations, so let's remove it altogether.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Borislav Petkov <bp@alien8.de> Cc: James Morse <james.morse@arm.com> Cc: Matt Fleming <matt@codeblueprint.co.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20191224151025.32482-22-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ard Biesheuvel
Mixed mode translates calls from the 64-bit kernel into the 32-bit firmware by wrapping them in a call to a thunking routine that pushes a 32-bit word onto the stack for each argument passed to the function, regardless of the argument type. This works surprisingly well for most services and protocols, with the exception of ones that take explicit 64-bit arguments. efi_free() invokes the FreePages() EFI boot service, which takes an efi_physical_addr_t as its address argument, and this is one of those 64-bit types. This means that the 32-bit firmware will interpret the (addr, size) pair as a single 64-bit quantity, and since it is guaranteed to have the high word set (as size > 0), the call will always fail, due to the fact that EFI memory allocations are always < 4 GB on 32-bit firmware. So let's fix this by giving the thunking code a little hand, and pass two values for the address and a third one for the size.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Borislav Petkov <bp@alien8.de> Cc: James Morse <james.morse@arm.com> Cc: Matt Fleming <matt@codeblueprint.co.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20191224151025.32482-21-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
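A sketch of the workaround: at the mixed-mode call site, the 64-bit efi_physical_addr_t is handed to the thunk as two explicit 32-bit halves, so each ends up as its own stack word; the helper macro, the thunk prototype and the wrapper below are illustrative.

/* Sketch: split a 64-bit value into (low, high) 32-bit stack words for
 * the mixed-mode thunk, which pushes one 32-bit word per argument. */
#define u64_split_lo_hi(val)    (u32)(val), (u32)((u64)(val) >> 32)

extern efi_status_t efi64_thunk(u32 fn, ...);   /* illustrative prototype */

static efi_status_t efi_free_pages_mixed(u32 free_pages_fn,
                                         efi_physical_addr_t addr,
                                         unsigned long nr_pages)
{
        /* address as (low, high), then the page count as a third word */
        return efi64_thunk(free_pages_fn, u64_split_lo_hi(addr), nr_pages);
}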
-
Committed by Ard Biesheuvel
We have a helper, efi_system_table(), that gives us the address of the EFI system table in memory, so there is no longer any point in passing it around from each function to the next.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Borislav Petkov <bp@alien8.de> Cc: James Morse <james.morse@arm.com> Cc: Matt Fleming <matt@codeblueprint.co.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20191224151025.32482-20-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ard Biesheuvel
As a first step towards getting rid of the need to pass around a 'sys_table_arg' function parameter pointing to the EFI system table, remove the references to it in the printing code, which represents the majority of the use cases.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Borislav Petkov <bp@alien8.de> Cc: James Morse <james.morse@arm.com> Cc: Matt Fleming <matt@codeblueprint.co.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20191224151025.32482-19-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ard Biesheuvel
The various pointers we stash in the efi_config struct, which we retrieve using __efi_early(), are simply copies of the ones in the EFI system table, which we started accessing directly in the previous patch. So drop all the __efi_early() related plumbing, as well as all the assembly code dealing with efi_config, which allows us to move the PE/COFF entry point to C code as well.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Borislav Petkov <bp@alien8.de> Cc: James Morse <james.morse@arm.com> Cc: Matt Fleming <matt@codeblueprint.co.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20191224151025.32482-18-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ard Biesheuvel
Use a single implementation for efi_char16_printk() across all architectures.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Borislav Petkov <bp@alien8.de> Cc: James Morse <james.morse@arm.com> Cc: Matt Fleming <matt@codeblueprint.co.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20191224151025.32482-17-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ard Biesheuvel
The efi_call macros on ARM have a dependency on a variable 'sys_table_arg' existing in the scope of the macro instantiation. Since this variable always points to the same data structure, let's create a global getter for it and use that instead. Note that the use of a global variable with external linkage is avoided, given the problems we had in the past with early processing of the GOT tables. While at it, drop the redundant casts in the efi_table_attr() and efi_call_proto() macros.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Borislav Petkov <bp@alien8.de> Cc: James Morse <james.morse@arm.com> Cc: Matt Fleming <matt@codeblueprint.co.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20191224151025.32482-16-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
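One way to read "a global getter without external linkage for the data" is a file-local pointer set once by the stub entry code and exposed through an accessor function; a minimal sketch with an illustrative setter name.

/* Sketch: keep the pointer in a static (internal-linkage) variable so
 * no externally visible data symbol needs a GOT entry. */
static const efi_system_table_t *sys_table;

const efi_system_table_t *efi_system_table(void)
{
        return sys_table;
}

void efi_stub_set_system_table(const efi_system_table_t *st)    /* illustrative */
{
        sys_table = st;
}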
-
Committed by Ard Biesheuvel
We use special wrapper routines to invoke firmware services in the native case as well as the mixed mode case. For mixed mode, the need is obvious, but for the native cases, we can simply rely on the compiler to generate the indirect call, given that GCC now has support for the MS calling convention (and has had it for quite some time). Note that on i386, the decompressor and the EFI stub are not built with -mregparm=3 like the rest of the i386 kernel, so we can safely allow the compiler to emit the indirect calls here as well. So drop all the wrappers and indirection, and switch to either native calls, or direct calls into the thunk routine for mixed mode.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Borislav Petkov <bp@alien8.de> Cc: James Morse <james.morse@arm.com> Cc: Matt Fleming <matt@codeblueprint.co.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20191224151025.32482-14-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ard Biesheuvel
Annotate all the firmware routines (boot services, runtime services and protocol methods) called in the boot context as __efiapi, and make it expand to __attribute__((ms_abi)) on 64-bit x86. This allows us to use the compiler to generate the calls into firmware that use the MS calling convention instead of the SysV one.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Borislav Petkov <bp@alien8.de> Cc: James Morse <james.morse@arm.com> Cc: Matt Fleming <matt@codeblueprint.co.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20191224151025.32482-13-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ard Biesheuvel
We will soon remove another level of pointer casting, so let's make sure all type handling involving firmware calls at boot time is correct.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Borislav Petkov <bp@alien8.de> Cc: James Morse <james.morse@arm.com> Cc: Matt Fleming <matt@codeblueprint.co.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20191224151025.32482-12-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ard Biesheuvel
Now that we have incorporated the mixed mode protocol definitions into the native ones using unions, we no longer need the separate 32/64-bit struct definitions, with the exception of the EFI system table definition and the boot services, runtime services and configuration table definitions. So drop the unused ones.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Borislav Petkov <bp@alien8.de> Cc: James Morse <james.morse@arm.com> Cc: Matt Fleming <matt@codeblueprint.co.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20191224151025.32482-11-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ard Biesheuvel
Currently, we support mixed mode by casting all boot-time firmware calls to 64-bit explicitly on native 64-bit systems, and to 32-bit on 32-bit systems or 64-bit systems running with 32-bit firmware. Due to this explicit awareness of the bitness in the code, we do a lot of casting, even in generic code that is shared with other architectures, where mixed mode does not even exist. This casting leads to a loss of type-checking coverage by the compiler, which we should try to avoid. So instead of distinguishing between 32-bit and 64-bit, distinguish between native and mixed, and limit all the nasty casting and pointer mangling to the code that actually deals with mixed mode.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Borislav Petkov <bp@alien8.de> Cc: James Morse <james.morse@arm.com> Cc: Matt Fleming <matt@codeblueprint.co.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20191224151025.32482-10-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ard Biesheuvel
In preparation for moving to a native vs. mixed mode split, rather than a 32 vs. 64-bit split, when it comes to invoking EFI firmware services, update all the native protocol definitions and redefine them as unions containing an anonymous struct for the native view and a struct called 'mixed_mode' describing the 32-bit view of the protocol when called from 64-bit code. While at it, flesh out some PCI I/O member definitions that we will be needing shortly.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Borislav Petkov <bp@alien8.de> Cc: James Morse <james.morse@arm.com> Cc: Matt Fleming <matt@codeblueprint.co.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20191224151025.32482-9-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
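The shape of such a protocol definition, sketched for a hypothetical two-method protocol; real UEFI protocols have more members and different names.

typedef union efi_example_protocol efi_example_protocol_t;

/* Sketch: the anonymous struct is the native view; 'mixed_mode' is the
 * 32-bit firmware view used by a 64-bit kernel on 32-bit firmware. */
union efi_example_protocol {
        struct {
                efi_status_t (__efiapi *reset)(efi_example_protocol_t *,
                                               bool extended_verification);
                efi_status_t (__efiapi *read)(efi_example_protocol_t *,
                                              unsigned long *size, void *buf);
        };
        struct {
                u32 reset;      /* 32-bit function pointer, invoked via the thunk */
                u32 read;
        } mixed_mode;
};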
-
Committed by Ard Biesheuvel
Iterating over an EFI handle array is a bit finicky, since we have to take mixed mode into account, where handles are only 32 bits wide, while the native efi_handle_t type is 64 bits. So introduce a helper, and replace the various occurrences of this pattern.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Borislav Petkov <bp@alien8.de> Cc: James Morse <james.morse@arm.com> Cc: Matt Fleming <matt@codeblueprint.co.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20191224151025.32482-8-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
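A sketch of what such a helper can look like: index the firmware-returned buffer as u32 in mixed mode and as efi_handle_t otherwise; the exact macro shipped in the stub may differ.

/* Sketch: fetch the i-th handle from a firmware-returned buffer,
 * honouring the 32-bit handle size in mixed mode. */
static inline efi_handle_t efi_get_handle_at(const void *buf, unsigned int i)
{
        if (efi_is_mixed())
                return (efi_handle_t)(unsigned long)((const u32 *)buf)[i];
        return ((const efi_handle_t *)buf)[i];
}

#define for_each_efi_handle(handle, array, nr, i)                         \
        for (i = 0;                                                       \
             i < (nr) && ((handle) = efi_get_handle_at((array), i), 1);   \
             i++)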
-
Committed by Ard Biesheuvel
The ARM architecture does not permit combining 32-bit and 64-bit code at the same privilege level, so EFI mixed mode is strictly an x86 concept. In preparation for turning the 32/64-bit distinction in shared stub code into a native vs. mixed one, refactor x86's current use of the helper function efi_is_native() into efi_is_mixed().
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Borislav Petkov <bp@alien8.de> Cc: James Morse <james.morse@arm.com> Cc: Matt Fleming <matt@codeblueprint.co.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20191224151025.32482-7-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ard Biesheuvel
The macro __efi_call_early() is defined by various architectures but never used. Let's get rid of it.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Borislav Petkov <bp@alien8.de> Cc: James Morse <james.morse@arm.com> Cc: Matt Fleming <matt@codeblueprint.co.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20191224151025.32482-6-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ard Biesheuvel
The EFI mixed mode entry code goes through the ordinary startup_32() routine before jumping into the kernel's EFI boot code in 64-bit mode. The 32-bit startup code must be entered with paging disabled, but this is not documented as a requirement for the EFI handover protocol, so we should disable paging explicitly when entering the kernel from 32-bit EFI firmware.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: <stable@vger.kernel.org> Cc: Arvind Sankar <nivedita@alum.mit.edu> Cc: Hans de Goede <hdegoede@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: https://lkml.kernel.org/r/20191224132909.102540-4-ardb@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 19 Dec 2019, 2 commits
-
-
Committed by Jim Mattson
The host reports support for the synthetic feature X86_FEATURE_SSBD when any of the three following hardware features are set: CPUID.(EAX=7,ECX=0):EDX.SSBD[bit 31], CPUID.80000008H:EBX.AMD_SSBD[bit 24], or CPUID.80000008H:EBX.VIRT_SSBD[bit 25]. Either of the first two hardware features implies the existence of the IA32_SPEC_CTRL MSR, but CPUID.80000008H:EBX.VIRT_SSBD[bit 25] does not. Therefore, CPUID.80000008H:EBX.AMD_SSBD[bit 24] should only be set in the guest if CPUID.(EAX=7,ECX=0):EDX.SSBD[bit 31] or CPUID.80000008H:EBX.AMD_SSBD[bit 24] is set on the host.
Fixes: 4c6903a0 ("KVM: x86: fix reporting of AMD speculation bug CPUID leaf") Signed-off-by: Jim Mattson <jmattson@google.com> Reviewed-by: Jacob Xu <jacobhxu@google.com> Reviewed-by: Peter Shier <pshier@google.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: stable@vger.kernel.org Reported-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Jim Mattson
The host reports support for the synthetic feature X86_FEATURE_SSBD when any of the three following hardware features are set: CPUID.(EAX=7,ECX=0):EDX.SSBD[bit 31], CPUID.80000008H:EBX.AMD_SSBD[bit 24], or CPUID.80000008H:EBX.VIRT_SSBD[bit 25]. Either of the first two hardware features implies the existence of the IA32_SPEC_CTRL MSR, but CPUID.80000008H:EBX.VIRT_SSBD[bit 25] does not. Therefore, CPUID.(EAX=7,ECX=0):EDX.SSBD[bit 31] should only be set in the guest if CPUID.(EAX=7,ECX=0):EDX.SSBD[bit 31] or CPUID.80000008H:EBX.AMD_SSBD[bit 24] is set on the host.
Fixes: 0c54914d ("KVM: x86: use Intel speculation bugs and features as derived in generic x86 code") Signed-off-by: Jim Mattson <jmattson@google.com> Reviewed-by: Jacob Xu <jacobhxu@google.com> Reviewed-by: Peter Shier <pshier@google.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: stable@vger.kernel.org Reported-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
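A sketch of the host-side condition both of these fixes apply, using the kernel's synthetic feature bits; the exact KVM call sites and the guest CPUID bit being masked differ between the two patches, so treat this as illustrative.

/* Sketch: only SSBD bits that imply IA32_SPEC_CTRL count; VIRT_SSBD
 * alone does not. */
static bool host_has_spec_ctrl_ssbd(void)
{
        return boot_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||  /* CPUID.7.0:EDX[31] */
               boot_cpu_has(X86_FEATURE_AMD_SSBD);          /* CPUID.8000_0008:EBX[24] */
}

/* In the CPUID emulation path (illustrative):
 *      if (!host_has_spec_ctrl_ssbd())
 *              entry->ebx &= ~F(AMD_SSBD);
 */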
-
- 17 Dec 2019, 3 commits
-
-
Committed by Alexander Shishkin
Commit ccbebba4 ("perf/x86/intel/pt: Bypass PT vs. LBR exclusivity if the core supports it") skips the PT/LBR exclusivity check on CPUs where PT and LBRs coexist, but it also inadvertently skips the active_events bump for PT in that case, which is a bug. If there aren't any hardware events at the same time as PT, the PMI handler will ignore PT PMIs, as active_events reads zero in that case, resulting in the "Uhhuh" spurious NMI warning and PT data loss. Fix this by always increasing active_events for PT events.
Fixes: ccbebba4 ("perf/x86/intel/pt: Bypass PT vs. LBR exclusivity if the core supports it") Reported-by: Vitaly Slobodskoy <vitaly.slobodskoy@intel.com> Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Alexey Budankov <alexey.budankov@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Link: https://lkml.kernel.org/r/20191210105101.77210-1-alexander.shishkin@linux.intel.com
-
Committed by Alexander Shishkin
Commit 8062382c ("perf/x86/intel/bts: Add BTS PMU driver") brought in a warning with the BTS buffer initialization that is easily tripped (assuming KPTI is disabled), instantly throwing:

> ------------[ cut here ]------------
> WARNING: CPU: 2 PID: 326 at arch/x86/events/intel/bts.c:86 bts_buffer_setup_aux+0x117/0x3d0
> Modules linked in:
> CPU: 2 PID: 326 Comm: perf Not tainted 5.4.0-rc8-00291-gceb9e773 #904
> RIP: 0010:bts_buffer_setup_aux+0x117/0x3d0
> Call Trace:
>  rb_alloc_aux+0x339/0x550
>  perf_mmap+0x607/0xc70
>  mmap_region+0x76b/0xbd0
...

It appears to assume (for lost raisins) that PagePrivate() is set, while later it actually tests for PagePrivate() before using page_private(). Make it consistent and always check PagePrivate() before using page_private().
Fixes: 8062382c ("perf/x86/intel/bts: Add BTS PMU driver") Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: Ingo Molnar <mingo@redhat.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Link: https://lkml.kernel.org/r/20191205142853.28894-2-alexander.shishkin@linux.intel.com
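The consistent pattern the fix describes, sketched in isolation; the assumption that page_private() holds the allocation order of a high-order AUX page is taken from the surrounding context and should be treated as illustrative.

/* Sketch: only trust page_private() when PagePrivate() is set;
 * otherwise treat the page as order-0. */
static unsigned int aux_page_order(struct page *page)
{
        return PagePrivate(page) ? page_private(page) : 0;
}

/* e.g. stepping through the AUX buffer: pg += 1 << aux_page_order(page); */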
-
Committed by Peter Zijlstra
UBSAN reported out-of-bounds accesses for x86_pmu.event_map(); its arguments should be < x86_pmu.max_events. Make sure all users observe this constraint.
Reported-by: Meelis Roos <mroos@linux.ee> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Meelis Roos <mroos@linux.ee>
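The constraint amounts to validating the generic event index before it is used to index the map; a sketch of the check, with the surrounding setup function being illustrative.

/* Sketch: reject out-of-range generic event indices before calling
 * x86_pmu.event_map(). */
static int set_generic_event_config(struct perf_event *event)
{
        u64 cfg = event->attr.config;

        if (cfg >= x86_pmu.max_events)
                return -EINVAL;         /* out of range for this PMU */

        event->hw.config |= x86_pmu.event_map(cfg);
        return 0;
}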
-