1. 07 Feb 2017 (1 commit)
  2. 13 Nov 2016 (1 commit)
    • efi: Allow bitness-agnostic protocol calls · 3552fdf2
      Lukas Wunner authored
      We already have a macro to invoke boot services which on x86 adapts
      automatically to the bitness of the EFI firmware:  efi_call_early().
      
      The macro allows sharing of functions across arches and bitness variants
      as long as those functions only call boot services.  However in practice
      functions in the EFI stub contain a mix of boot services calls and
      protocol calls.
      
      Add an efi_call_proto() macro for bitness-agnostic protocol calls to
      allow sharing more code across arches as well as deduplicating 32 bit
      and 64 bit code paths.
      
      On x86, implement it using a new efi_table_attr() macro for bitness-
      agnostic table lookups.  Refactor efi_call_early() to make use of the
      same macro.  (The resulting object code remains identical.)
      Signed-off-by: Lukas Wunner <lukas@wunner.de>
      Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Andreas Noever <andreas.noever@gmail.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Jones <pjones@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-efi@vger.kernel.org
      Link: http://lkml.kernel.org/r/20161112213237.8804-8-matt@codeblueprint.co.uk
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      3552fdf2
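      The commit text describes what efi_call_proto() and efi_table_attr() do but
      not how they look. Below is a minimal sketch of a bitness-agnostic table
      lookup and protocol call on x86, assuming an efi_early config object with
      an is64 flag, a ->call() mixed-mode thunk, and per-bitness protocol
      typedefs; the names are illustrative, not the exact upstream code.

      /*
       * Sketch only: pick the 32- or 64-bit layout of an EFI table at runtime
       * and call a protocol member through the mixed-mode thunk.
       */
      #define efi_table_attr(inst, attr, instance)                            \
              (efi_early->is64 ?                                              \
                      ((inst##_64 *)(unsigned long)(instance))->attr :        \
                      ((inst##_32 *)(unsigned long)(instance))->attr)

      #define efi_call_proto(protocol, f, instance, ...)                      \
              efi_early->call((unsigned long)                                 \
                              efi_table_attr(protocol, f, instance),          \
                              instance, ##__VA_ARGS__)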
  3. 09 Sep 2016 (3 commits)
    • x86/efi: Allow invocation of arbitrary boot services · 0a637ee6
      Lukas Wunner authored
      We currently allow invocation of 8 boot services with efi_call_early().
      Not included are LocateHandleBuffer and LocateProtocol in particular.
      For graphics output or to retrieve PCI ROMs and Apple device properties,
      we're thus forced to use the LocateHandle + AllocatePool + LocateHandle
      combo, which is cumbersome and needs more code.
      
      The ARM folks allow invocation of the full set of boot services but are
      restricted to our 8 boot services in functions shared across arches.
      
      Thus, rather than adding just LocateHandleBuffer and LocateProtocol to
      struct efi_config, let's rework efi_call_early() to allow invocation of
      arbitrary boot services by selecting the 64 bit vs 32 bit code path in
      the macro itself.
      
      When compiling for 32 bit or for 64 bit without mixed mode, the unused
      code path is optimized away and the binary code is the same as before.
      But on 64 bit with mixed mode enabled, this commit adds one compare
      instruction to each invocation of a boot service and, depending on the
      code path selected, two jump instructions. (Most of the time gcc
      arranges the jumps in the 32 bit code path.) The result is a minuscule
      performance penalty and the binary code becomes slightly larger and more
      difficult to read when disassembled. This isn't a hot path, so these
      drawbacks are arguably outweighed by the attainable simplification of
      the C code. We have some overhead anyway for thunking or conversion
      between calling conventions.
      
      The 8 boot services can consequently be removed from struct efi_config.
      
      No functional change intended (for now).
      
      Example -- invocation of free_pool before (64 bit code path):
      0x2d4      movq  %ds:efi_early, %rdx          ; efi_early
      0x2db      movq  %ss:arg_0-0x20(%rsp), %rsi
      0x2e0      xorl  %eax, %eax
      0x2e2      movq  %ds:0x28(%rdx), %rdi         ; efi_early->free_pool
      0x2e6      callq *%ds:0x58(%rdx)              ; efi_early->call()
      
      Example -- invocation of free_pool after (64 / 32 bit mixed code path):
      0x0dc      movq  %ds:efi_early, %rax          ; efi_early
      0x0e3      cmpb  $0, %ds:0x28(%rax)           ; !efi_early->is64 ?
      0x0e7      movq  %ds:0x20(%rax), %rdx         ; efi_early->call()
      0x0eb      movq  %ds:0x10(%rax), %rax         ; efi_early->boot_services
      0x0ef      je    $0x150
      0x0f1      movq  %ds:0x48(%rax), %rdi         ; free_pool (64 bit)
      0x0f5      xorl  %eax, %eax
      0x0f7      callq *%rdx
      ...
      0x150      movl  %ds:0x30(%rax), %edi         ; free_pool (32 bit)
      0x153      jmp   $0x0f5
      
      Size of eboot.o text section:
      CONFIG_X86_32:                         6464 before, 6318 after
      CONFIG_X86_64 && !CONFIG_EFI_MIXED:    7670 before, 7573 after
      CONFIG_X86_64 &&  CONFIG_EFI_MIXED:    7670 before, 8319 after
      Signed-off-by: Lukas Wunner <lukas@wunner.de>
      Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
      0a637ee6
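      To make the "compare + jump" visible in the disassembly above concrete,
      here is a hedged C sketch of an efi_call_early() that selects the 64-bit
      or 32-bit boot services table inside the macro itself; the typedef names
      and the efi_early object are illustrative assumptions.

      /*
       * Sketch: one macro, two code paths.  When mixed mode is compiled out,
       * the is64 test folds to a constant and the dead branch disappears.
       */
      #define efi_call_early(f, ...)                                          \
              (efi_early->is64 ?                                              \
               efi_early->call((unsigned long)                                \
                       ((efi_boot_services_64 *)(unsigned long)               \
                               efi_early->boot_services)->f, __VA_ARGS__) :   \
               efi_early->call((unsigned long)                                \
                       ((efi_boot_services_32 *)(unsigned long)               \
                               efi_early->boot_services)->f, __VA_ARGS__))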
    • x86/efi: Optimize away setup_gop32/64 if unused · 27571616
      Lukas Wunner authored
      Commit 2c23b73c ("x86/efi: Prepare GOP handling code for reuse
      as generic code") introduced an efi_is_64bit() macro to x86 which
      previously only existed for arm arches.  The macro is used to
      choose between the 64 bit or 32 bit code path in gop.c at runtime.
      
      However the code path that's going to be taken is known at compile
      time when compiling for x86_32 or for x86_64 with mixed mode disabled.
      Amend the macro to eliminate the unused code path in those cases.
      
      Size of gop.o text section:
      CONFIG_X86_32:                         1758 before, 1299 after
      CONFIG_X86_64 && !CONFIG_EFI_MIXED:    2201 before, 1406 after
      CONFIG_X86_64 &&  CONFIG_EFI_MIXED:    2201 before and after
      Signed-off-by: Lukas Wunner <lukas@wunner.de>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
      27571616
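      A plausible shape for the amended macro, sketched as a helper and assuming
      a runtime is64 flag in the boot-stub config structure (names illustrative):

      /* Sketch: collapse to a compile-time constant whenever possible. */
      static inline bool efi_is_64bit(void)
      {
              if (!IS_ENABLED(CONFIG_X86_64))
                      return false;            /* 32-bit build              */
              if (!IS_ENABLED(CONFIG_EFI_MIXED))
                      return true;             /* 64-bit, no mixed mode     */
              return efi_early->is64;          /* mixed mode: runtime check */
      }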
    • efi: Refactor efi_memmap_init_early() into arch-neutral code · 9479c7ce
      Matt Fleming 提交于
      Matt Fleming authored
      Every EFI architecture apart from ia64 needs to setup the EFI memory
      map at efi.memmap, and the code for doing that is essentially the same
      across all implementations. Therefore, it makes sense to factor this
      out into the common code under drivers/firmware/efi/.
      
      The only slight variation is the data structure out of which we pull
      the initial memory map information, such as physical address, memory
      descriptor size and version, etc. We can address this by passing a
      generic data structure (struct efi_memory_map_data) as the argument to
      efi_memmap_init_early() which contains the minimum info required for
      initialising the memory map.
      
      In the process, this patch also fixes a few undesirable implementation
      differences:
      
       - ARM and arm64 were failing to clear the EFI_MEMMAP bit when
         unmapping the early EFI memory map. EFI_MEMMAP indicates whether
         the EFI memory map is mapped (not the regions contained within) and
         can be traversed.  It's more correct to set the bit as soon as we
         memremap() the passed in EFI memmap.
      
 - Rename efi_unmap_memmap() to efi_memmap_unmap() to adhere to the
         regular naming scheme.
      
      This patch also uses a read-write mapping for the memory map instead
      of the read-only mapping currently used on ARM and arm64. x86 needs
      the ability to update the memory map in-place when assigning virtual
      addresses to regions (efi_map_region()) and tagging regions when
      reserving boot services (efi_reserve_boot_services()).
      
      There's no way for the generic fake_mem code to know which mapping to
      use without introducing some arch-specific constant/hook, so just use
      read-write since read-only is of dubious value for the EFI memory map.
      
      Tested-by: Dave Young <dyoung@redhat.com> [kexec/kdump]
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [arm]
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Leif Lindholm <leif.lindholm@linaro.org>
      Cc: Peter Jones <pjones@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
      9479c7ce
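      A sketch of the generic data structure and entry point the commit
      describes; the field names follow the description above (physical address,
      size, descriptor size/version) but should be read as illustrative.

      /* Sketch: the minimum information needed to map the firmware memmap. */
      struct efi_memory_map_data {
              phys_addr_t     phys_map;       /* physical address of the map */
              unsigned long   size;           /* total size in bytes         */
              unsigned long   desc_version;   /* descriptor format version   */
              unsigned long   desc_size;      /* size of one descriptor      */
      };

      /* Arch code fills in the struct; common code memremap()s the map and
       * sets EFI_MEMMAP, and efi_memmap_unmap() clears it again. */
      int __init efi_memmap_init_early(struct efi_memory_map_data *data);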
  4. 15 Jul 2016 (1 commit)
  5. 27 Jun 2016 (1 commit)
    • efi: Convert efi_call_virt() to efi_call_virt_pointer() · 80e75596
      Alex Thorlton authored
      This commit makes a few slight modifications to the efi_call_virt() macro
      to get it to work with function pointers that are stored in locations
      other than efi.systab->runtime, and renames the macro to
      efi_call_virt_pointer().  The majority of the changes here are to pull
      these macros up into header files so that they can be accessed from
      outside of drivers/firmware/efi/runtime-wrappers.c.
      
      The most significant change not directly related to the code move is the
      addition of an extra "p" argument to the appropriate efi_call macros,
      which is used in place of the formerly hard-coded efi.systab->runtime
      pointer.
      
      The last piece of the puzzle was to add an efi_call_virt() macro back into
      drivers/firmware/efi/runtime-wrappers.c to wrap around the new
      efi_call_virt_pointer() macro - this was mainly to keep the code from
      looking too cluttered by adding a bunch of extra references to
      efi.systab->runtime everywhere.
      
      Note that I also broke up the code in the efi_call_virt_pointer() macro a
      bit in the process of moving it.
      Signed-off-by: Alex Thorlton <athorlton@sgi.com>
      Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dimitri Sivanich <sivanich@sgi.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Roy Franz <roy.franz@linaro.org>
      Cc: Russ Anderson <rja@sgi.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-efi@vger.kernel.org
      Link: http://lkml.kernel.org/r/1466839230-12781-5-git-send-email-matt@codeblueprint.co.uk
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      80e75596
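      A condensed sketch of the resulting macro pair, with the
      arch_efi_call_virt_*() hooks standing in for the per-architecture
      setup/teardown details (treat the exact shape as an approximation):

      /* Sketch: the generic wrapper takes the table pointer explicitly ... */
      #define efi_call_virt_pointer(p, f, args...)                            \
      ({                                                                      \
              efi_status_t __s;                                               \
                                                                              \
              arch_efi_call_virt_setup();                                     \
              __s = arch_efi_call_virt(p, f, args);                           \
              arch_efi_call_virt_teardown();                                  \
                                                                              \
              __s;                                                            \
      })

      /* ... while the old name stays hard-wired to efi.systab->runtime. */
      #define efi_call_virt(f, args...)                                       \
              efi_call_virt_pointer(efi.systab->runtime, f, args)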
  6. 28 Apr 2016 (4 commits)
  7. 24 Feb 2016 (1 commit)
  8. 22 Feb 2016 (1 commit)
    • x86/efi: Map EFI_MEMORY_{XP,RO} memory region bits to EFI page tables · 6d0cc887
      Sai Praneeth authored
      Now that we have EFI memory region bits that indicate which regions do
      not need execute permission or read/write permission in the page tables,
      let's use them.
      
      We also check for EFI_NX_PE_DATA and only enforce the restrictive
      mappings if it's present (to allow us to ignore buggy firmware that sets
      bits it didn't mean to and to preserve backwards compatibility).
      
      Instead of assuming that firmware sets appropriate attributes in each
      memory descriptor (EFI_MEMORY_RO for code, EFI_MEMORY_XP for data), we
      have to expect firmware out there that only sets the *type* field to
      EFI_RUNTIME_SERVICES_CODE or EFI_RUNTIME_SERVICES_DATA and leaves the
      attribute field empty. That would lead to improper mappings of EFI
      runtime regions, so we check both the attribute and the type of each
      memory descriptor when updating the mappings; this also matches how
      Windows behaves.
      Signed-off-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Lee, Chun-Yi <jlee@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Luis R. Rodriguez <mcgrof@suse.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi Shankar <ravi.v.shankar@intel.com>
      Cc: Ricardo Neri <ricardo.neri@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Cc: linux-efi@vger.kernel.org
      Link: http://lkml.kernel.org/r/1455712566-16727-13-git-send-email-matt@codeblueprint.co.uk
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6d0cc887
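      A hedged sketch of the attribute-to-page-flag translation the commit
      describes; the helper name is made up, and the gate on EFI_NX_PE_DATA
      would sit in the caller.

      /* Sketch: honour the attributes only when firmware has opted in. */
      static unsigned long efi_md_pgflags(efi_memory_desc_t *md)
      {
              unsigned long pf = 0;

              if (md->attribute & EFI_MEMORY_XP)      /* data: never executed */
                      pf |= _PAGE_NX;

              if (!(md->attribute & EFI_MEMORY_RO))   /* not read-only code   */
                      pf |= _PAGE_RW;

              return pf;
      }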
  9. 29 Nov 2015 (2 commits)
    • x86/efi: Build our own page table structures · 67a9108e
      Matt Fleming authored
      With commit e1a58320 ("x86/mm: Warn on W^X mappings") all
      users booting on 64-bit UEFI machines see the following warning,
      
        ------------[ cut here ]------------
        WARNING: CPU: 7 PID: 1 at arch/x86/mm/dump_pagetables.c:225 note_page+0x5dc/0x780()
        x86/mm: Found insecure W+X mapping at address ffff88000005f000/0xffff88000005f000
        ...
        x86/mm: Checked W+X mappings: FAILED, 165660 W+X pages found.
        ...
      
      This is caused by mapping EFI regions with RWX permissions.
      There isn't much we can do to restrict the permissions for these
      regions due to the way the firmware toolchains mix code and
      data, but we can at least isolate these mappings so that they do
      not appear in the regular kernel page tables.
      
      In commit d2f7cbe7 ("x86/efi: Runtime services virtual
      mapping") we started using 'trampoline_pgd' to map the EFI
      regions because there was an existing identity mapping there
      which we use during the SetVirtualAddressMap() call and for
      broken firmware that accesses those addresses.
      
      But 'trampoline_pgd' shares some PGD entries with
      'swapper_pg_dir' and does not provide the isolation we require.
      Notably, the virtual addresses of __START_KERNEL_map and
      MODULES_START are mapped by the same PGD entry, so we need to be
      more careful when copying changes over in
      efi_sync_low_kernel_mappings().
      
      This patch doesn't go the whole way; we still want to share some
      PGD entries with 'swapper_pg_dir'. Having completely separate
      page tables brings its own issues such as synchronising new
      mappings after memory hotplug and module loading. Sharing also
      keeps memory usage down.
      Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Acked-by: Borislav Petkov <bp@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Jones <davej@codemonkey.org.uk>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Cc: Stephen Smalley <sds@tycho.nsa.gov>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Cc: linux-efi@vger.kernel.org
      Link: http://lkml.kernel.org/r/1448658575-17029-6-git-send-email-matt@codeblueprint.co.uk
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      67a9108e
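      A minimal sketch of the isolation idea: give the EFI regions their own PGD
      and copy only the kernel mappings EFI needs into it, so the RWX firmware
      mappings never show up in swapper_pg_dir. The allocation details here are
      assumptions, not the exact upstream code.

      /* Sketch: a dedicated top-level page table for EFI runtime calls. */
      pgd_t *efi_pgd;

      int __init efi_alloc_page_tables(void)
      {
              efi_pgd = (pgd_t *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
              if (!efi_pgd)
                      return -ENOMEM;

              /*
               * Copy the shared kernel PGD entries (direct map, kernel text)
               * into efi_pgd instead of sharing everything, cf.
               * efi_sync_low_kernel_mappings().
               */
              return 0;
      }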
    • x86/efi: Hoist page table switching code into efi_call_virt() · c9f2a9a6
      Matt Fleming authored
      This change is a prerequisite for pending patches that switch to
      a dedicated EFI page table, instead of using 'trampoline_pgd'
      which shares PGD entries with 'swapper_pg_dir'. The pending
      patches make it impossible to dereference the runtime service
      function pointer without first switching %cr3.
      
      It's true that we now have duplicated switching code in
      efi_call_virt() and efi_call_phys_{prolog,epilog}(), but we accept
      that duplication in exchange for a little more clarity and the ease
      of writing the page table switching code in C instead of asm.
      Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Acked-by: Borislav Petkov <bp@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Jones <davej@codemonkey.org.uk>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Cc: Stephen Smalley <sds@tycho.nsa.gov>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Cc: linux-efi@vger.kernel.org
      Link: http://lkml.kernel.org/r/1448658575-17029-5-git-send-email-matt@codeblueprint.co.uk
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c9f2a9a6
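      Expressed as a C sketch of the idea only: read_cr3()/write_cr3() are used
      here for brevity, and efi_scratch.efi_pgt is an assumption taken from the
      surrounding x86 EFI code rather than a literal copy of the patch.

      /* Sketch: switch page tables around every runtime service call. */
      #define efi_call_virt(f, ...)                                           \
      ({                                                                      \
              efi_status_t __s;                                               \
              unsigned long __saved_cr3 = read_cr3();                         \
                                                                              \
              write_cr3(efi_scratch.efi_pgt);        /* EFI page tables   */  \
              __s = efi_call((void *)efi.systab->runtime->f, ##__VA_ARGS__);  \
              write_cr3(__saved_cr3);                /* back to the kernel */ \
              __s;                                                            \
      })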
  10. 12 Oct 2015 (1 commit)
  11. 02 Oct 2015 (1 commit)
  12. 30 Sep 2015 (1 commit)
  13. 23 Sep 2015 (1 commit)
    • x86, efi, kasan: #undef memset/memcpy/memmove per arch · 769a8089
      Andrey Ryabinin authored
      In non-instrumented code, KASAN replaces the instrumented memset/memcpy/memmove
      with their non-instrumented analogues __memset/__memcpy/__memmove.
      
      However, on x86 the EFI stub is not linked with the kernel.  It uses the
      non-instrumented mem*() functions from arch/x86/boot/compressed/string.c,
      so we don't replace them with the __mem*() variants in the EFI stub.
      
      On ARM64 the EFI stub is linked with the kernel, so we should replace
      mem*() functions with __mem*(), because the EFI stub runs before KASAN
      sets up early shadow.
      
      So let's move these #undef mem* into arch's asm/efi.h which is also
      included by the EFI stub.
      
      Also, this will fix the warning in 32-bit build reported by kbuild test
      robot:
      
      	efi-stub-helper.c:599:2: warning: implicit declaration of function 'memcpy'
      
      [akpm@linux-foundation.org: use 80 cols in comment]
      Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Reported-by: Fengguang Wu <fengguang.wu@gmail.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Matt Fleming <matt.fleming@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      769a8089
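      The arrangement could look like this in an arch's asm/efi.h (sketch; the
      exact guard and comment wording differ between x86 and arm64):

      #ifdef CONFIG_KASAN
      /*
       * The EFI stub runs before KASAN sets up its early shadow, so the
       * instrumented mem*() substitutions must not leak into stub code.
       */
      #undef memcpy
      #undef memset
      #undef memmove
      #endif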
  14. 19 May 2015 (1 commit)
    • x86/fpu: Rename i387.h to fpu/api.h · df6b35f4
      Ingo Molnar authored
      We already have fpu/types.h, move i387.h to fpu/api.h.
      
      The file name has become a misnomer anyway: it offers generic FPU APIs,
      but is not limited to i387 functionality.
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      df6b35f4
  15. 01 Apr 2015 (1 commit)
    • efi: Clean up the efi_call_phys_[prolog|epilog]() save/restore interaction · 744937b0
      Ingo Molnar authored
      Currently x86-64 efi_call_phys_prolog() saves into a global variable (save_pgd),
      and efi_call_phys_epilog() restores the kernel pagetables from that global
      variable.
      
      Change this to a cleaner save/restore pattern where the saving function returns
      the saved object and the restore function restores that.
      
      Apply the same concept to the 32-bit code as well.
      
      Plus this approach, as an added bonus, allows us to express the
      !efi_enabled(EFI_OLD_MEMMAP) situation in a clean fashion as well,
      via a 'NULL' return value.
      
      Cc: Tapasweni Pathak <tapaswenipathak@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      744937b0
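      The resulting interface, sketched below; the body details and the n_pgds
      sizing are assumptions, and the NULL return encodes the
      !efi_enabled(EFI_OLD_MEMMAP) case described above.

      pgd_t * __init efi_call_phys_prolog(void)
      {
              int n_pgds = PTRS_PER_PGD;              /* illustrative sizing */
              pgd_t *save_pgd;

              if (!efi_enabled(EFI_OLD_MEMMAP))
                      return NULL;                    /* nothing to save */

              save_pgd = kmalloc_array(n_pgds, sizeof(*save_pgd), GFP_KERNEL);
              /* ... save the low kernel mappings, install the 1:1 mapping ... */
              return save_pgd;
      }

      void __init efi_call_phys_epilog(pgd_t *save_pgd)
      {
              if (!save_pgd)
                      return;

              /* ... restore the saved kernel mappings ... */
              kfree(save_pgd);
      }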
  16. 12 Nov 2014 (1 commit)
    • efi/x86: Move x86 back to libstub · 243b6754
      Ard Biesheuvel authored
      This reverts commit 84be8805, which itself reverted my original
      attempt to move x86 from #include'ing .c files from across the tree
      to using the EFI stub built as a static library.
      
      The issue that affected the original approach was that splitting
      the implementation into several .o files resulted in the variable
      'efi_early' becoming a global with external linkage, which under
      -fPIC implies that references to it must go through the GOT. However,
      dealing with this additional GOT entry turned out to be troublesome
      on some EFI implementations. (GCC's visibility=hidden attribute is
      supposed to lift this requirement, but it turned out not to work on
      the 32-bit build.)
      
      Instead, use a pure getter function to get a reference to efi_early.
      This approach results in no additional GOT entries being generated,
      so there is no need for any changes in the early GOT handling.
      Tested-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      243b6754
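      The getter trick in miniature (sketch): keeping the object static means no
      external-linkage data symbol and hence no GOT entry under -fPIC; callers
      such as efi_call_early() then go through the function instead of touching
      the variable. The struct name is taken from the surrounding commits.

      static struct efi_config *efi_early;

      /* A pure getter: callers never reference efi_early directly. */
      struct efi_config * __pure __efi_early(void)
      {
              return efi_early;
      }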
  17. 04 Oct 2014 (5 commits)
    • efi: Delete the in_nmi() conditional runtime locking · 60b4dc77
      Matt Fleming authored
      Commit 5dc3826d9f08 ("efi: Implement mandatory locking for UEFI Runtime
      Services") implemented some conditional locking when accessing the variable
      runtime services, which Ingo described as "pretty disgusting".
      
      The intention with the !efi_in_nmi() checks was to avoid live-locks when
      trying to write pstore crash data into an EFI variable. Such lockless
      accesses are allowed according to the UEFI specification when we're in a
      "non-recoverable" state, but whether or not things are implemented
      correctly in actual firmware implementations remains an unanswered
      question, and so it would seem sensible to avoid doing any kind of
      unsynchronized variable accesses.
      
      Furthermore, the efi_in_nmi() tests are inadequate because they don't
      account for the case where we call EFI variable services from panic or
      oops callbacks and aren't executing in NMI context. In other words,
      live-locking is still possible.
      
      Let's just remove the conditional locking altogether. Now we've got the
      ->set_variable_nonblocking() EFI variable operation we can abort if the
      runtime lock is already held. Aborting is by far the safest option.
      
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Matthew Garrett <mjg59@srcf.ucam.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      60b4dc77
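      A hedged sketch of the non-blocking variant the text leans on: if the
      runtime lock is already held (e.g. we are in an oops path), abort instead
      of spinning. The wrapper name matches the text; efi_runtime_lock is the
      runtime-services spinlock introduced by the locking commit further down.

      static efi_status_t
      virt_efi_set_variable_nonblocking(efi_char16_t *name, efi_guid_t *vendor,
                                        u32 attr, unsigned long data_size,
                                        void *data)
      {
              efi_status_t status;

              if (!spin_trylock(&efi_runtime_lock))
                      return EFI_NOT_READY;   /* caller must cope with the abort */

              status = efi_call_virt(set_variable, name, vendor, attr,
                                     data_size, data);
              spin_unlock(&efi_runtime_lock);
              return status;
      }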
    • x86/efi: Mark initialization code as such · 4e78eb05
      Mathias Krause authored
      The 32 bit and 64 bit implementations differ in their __init annotations
      for some functions referenced from the common EFI code. Namely, the 32
      bit variant is missing some of the __init annotations the 64 bit variant
      has.
      
      To resolve the colliding annotations, mark the corresponding functions in
      efi_32.c as initialization code, too -- which is what they are.
      
      Actually, quite a few more functions are only used during initialization
      and therefore can be marked __init. They are therefore annotated, too.
      Also add the __init annotation to the prototypes in the efi.h header so
      users of those functions will see it's meant as initialization code
      only.
      
      This patch also fixes the "prelog" typo. ("prologue" / "epilogue" might
      be more appropriate but this is C code after all, not an opera! :D)
      Signed-off-by: Mathias Krause <minipli@googlemail.com>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      4e78eb05
    • x86/efi: Unexport add_efi_memmap variable · 60920685
      Mathias Krause authored
      This variable was accidentally exported, even though it's only used in
      this compilation unit and only during initialization.
      
      Remove the bogus export, make the variable static instead and mark it
      as __initdata.
      
      Fixes: 200001eb ("x86 boot: only pick up additional EFI memmap...")
      Cc: Paul Jackson <pj@sgi.com>
      Signed-off-by: Mathias Krause <minipli@googlemail.com>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      60920685
    • x86/efi: Remove unused efi_call* macros · 24ffd84b
      Mathias Krause authored
      Complement commit 62fa6e69 ("x86/efi: Delete most of the efi_call*
      macros") and delete the stub macros for the !CONFIG_EFI case, too. In
      fact, there are no EFI calls in that case, so we don't even need a
      dummy for efi_call().
      Signed-off-by: Mathias Krause <minipli@googlemail.com>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      24ffd84b
    • efi: Implement mandatory locking for UEFI Runtime Services · 161485e8
      Ard Biesheuvel authored
      According to section 7.1 of the UEFI spec, Runtime Services are not fully
      reentrant, and there are particular combinations of calls that need to be
      serialized. Use a spinlock to serialize all Runtime Services with respect
      to all others, even if this is more than strictly needed.
      
      We've managed to get away without requiring a runtime services lock
      until now because most of the interactions with EFI involve EFI
      variables, and those operations are already serialised with
      __efivars->lock.
      
      Some of the assumptions underlying the decision whether locks are
      needed or not (e.g., SetVariable() against ResetSystem()) may not
      apply universally to all [new] architectures that implement UEFI.
      Rather than try to reason our way out of this, let's just implement at
      least what the spec requires in terms of locking.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      161485e8
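      The serialization pattern in miniature: one lock around every runtime
      service call. The wrapper shown is a sketch; the real runtime wrappers
      cover each service, and the use of the irqsave variant here is a
      simplifying assumption.

      static DEFINE_SPINLOCK(efi_runtime_lock);

      /* Sketch: every runtime service takes the same lock. */
      static efi_status_t virt_efi_get_time(efi_time_t *tm, efi_time_cap_t *tc)
      {
              efi_status_t status;
              unsigned long flags;

              spin_lock_irqsave(&efi_runtime_lock, flags);
              status = efi_call_virt(get_time, tm, tc);
              spin_unlock_irqrestore(&efi_runtime_lock, flags);

              return status;
      }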
  18. 24 Sep 2014 (1 commit)
    • Revert "efi/x86: efistub: Move shared dependencies to <asm/efi.h>" · 84be8805
      Matt Fleming authored
      This reverts commit f23cf8bd ("efi/x86: efistub: Move shared
      dependencies to <asm/efi.h>") as well as the x86 parts of commit
      f4f75ad5 ("efi: efistub: Convert into static library").
      
      The road leading to these two reverts is long and winding.
      
      The above two commits were merged during the v3.17 merge window and
      turned the common EFI boot stub code into a static library. This
      necessitated making some symbols global in the x86 boot stub which
      introduced new entries into the early boot GOT.
      
      The problem was that we weren't fixing up the newly created GOT entries
      before invoking the EFI boot stub, which sometimes resulted in hangs or
      resets. This failure was reported by Maarten on his Macbook pro.
      
      The proposed fix was commit 9cb0e394 ("x86/efi: Fixup GOT in all
      boot code paths"). However, that caused issues for Linus when booting
      his Sony Vaio Pro 11. It was subsequently reverted in commit
      f3670394.
      
      So that leaves us back with Maarten's Macbook pro not booting.
      
      At this stage in the release cycle the least risky option is to revert
      the x86 EFI boot stub to the pre-merge window code structure where we
      explicitly #include efi-stub-helper.c instead of linking with the static
      library. The arm64 code remains unaffected.
      
      We can take another swing at the x86 parts for v3.18.
      
      Conflicts:
      	arch/x86/include/asm/efi.h
      Tested-by: Josh Boyer <jwboyer@fedoraproject.org>
      Tested-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
      Tested-by: Leif Lindholm <leif.lindholm@linaro.org> [arm64]
      Tested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      84be8805
  19. 19 Jul 2014 (1 commit)
    • x86/reboot: Add EFI reboot quirk for ACPI Hardware Reduced flag · 44be28e9
      Matt Fleming authored
      It appears that the BayTrail-T class of hardware requires EFI in order
      to power down and reboot, and that no other reliable method exists.
      
      This quirk is generally applicable to all hardware that has the ACPI
      Hardware Reduced bit set, since usually ACPI would be the preferred
      method.
      
      Cc: Len Brown <len.brown@intel.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      44be28e9
  20. 08 Jul 2014 (1 commit)
  21. 19 Jun 2014 (1 commit)
  22. 17 Apr 2014 (4 commits)
  23. 05 Mar 2014 (5 commits)
    • x86/efi: Quirk out SGI UV · a5d90c92
      Borislav Petkov authored
      Alex reported hitting the following BUG after the EFI 1:1 virtual
      mapping work was merged,
      
       kernel BUG at arch/x86/mm/init_64.c:351!
       invalid opcode: 0000 [#1] SMP
       Call Trace:
        [<ffffffff818aa71d>] init_extra_mapping_uc+0x13/0x15
        [<ffffffff818a5e20>] uv_system_init+0x22b/0x124b
        [<ffffffff8108b886>] ? clockevents_register_device+0x138/0x13d
        [<ffffffff81028dbb>] ? setup_APIC_timer+0xc5/0xc7
        [<ffffffff8108b620>] ? clockevent_delta2ns+0xb/0xd
        [<ffffffff818a3a92>] ? setup_boot_APIC_clock+0x4a8/0x4b7
        [<ffffffff8153d955>] ? printk+0x72/0x74
        [<ffffffff818a1757>] native_smp_prepare_cpus+0x389/0x3d6
        [<ffffffff818957bc>] kernel_init_freeable+0xb7/0x1fb
        [<ffffffff81535530>] ? rest_init+0x74/0x74
        [<ffffffff81535539>] kernel_init+0x9/0xff
        [<ffffffff81541dfc>] ret_from_fork+0x7c/0xb0
        [<ffffffff81535530>] ? rest_init+0x74/0x74
      
      Getting this thing to work with the new mapping scheme would need more
      work, so automatically switch to the old memmap layout for SGI UV.
      Acked-by: Russ Anderson <rja@sgi.com>
      Cc: Alex Thorlton <athorlton@sgi.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      a5d90c92
    • x86/efi: Wire up CONFIG_EFI_MIXED · 7d453eee
      Matt Fleming authored
      Add the Kconfig option and bump the kernel header version so that boot
      loaders can check whether the handover code is available if they want.
      
      The xloadflags field in the bzImage header is also updated to reflect
      that the kernel supports both entry points by setting both of
      XLF_EFI_HANDOVER_32 and XLF_EFI_HANDOVER_64 when CONFIG_EFI_MIXED=y.
      XLF_CAN_BE_LOADED_ABOVE_4G is disabled so that the kernel text is
      guaranteed to be addressable with 32-bits.
      
      Note that no boot loaders should be using the bits set in xloadflags to
      decide which entry point to jump to. The entire scheme is based on the
      concept that 32-bit bootloaders always jump to ->handover_offset and
      64-bit loaders always jump to ->handover_offset + 512. We set both bits
      merely to inform the boot loader that it's safe to use the native
      handover offset even if the machine type in the PE/COFF header claims
      otherwise.
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      7d453eee
    • x86/efi: Add mixed runtime services support · 4f9dbcfc
      Matt Fleming authored
      Setup the runtime services based on whether we're booting in EFI native
      mode or not. For non-native mode we need to thunk from 64-bit into
      32-bit mode before invoking the EFI runtime services.
      
      Using the runtime services after SetVirtualAddressMap() is slightly more
      complicated because we need to ensure that all the addresses we pass to
      the firmware are below the 4GB boundary so that they can be addressed
      with 32-bit pointers, see efi_setup_page_tables().
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      4f9dbcfc
    • x86/efi: Firmware agnostic handover entry points · b8ff87a6
      Matt Fleming authored
      The EFI handover code only works if the "bitness" of the firmware and
      the kernel match, i.e. 64-bit firmware and 64-bit kernel - it is not
      possible to mix the two. This goes against the tradition that a 32-bit
      kernel can be loaded on a 64-bit BIOS platform without having to do
      anything special in the boot loader. Linux distributions, for one thing,
      regularly run only 32-bit kernels on their live media.
      
      Despite having only one 'handover_offset' field in the kernel header,
      EFI boot loaders use two separate entry points to enter the kernel based
      on the architecture the boot loader was compiled for,
      
          (1) 32-bit loader: handover_offset
          (2) 64-bit loader: handover_offset + 512
      
      Since we already have two entry points, we can leverage them to infer
      the bitness of the firmware we're running on, without requiring any boot
      loader modifications, by making (1) and (2) valid entry points for both
      CONFIG_X86_32 and CONFIG_X86_64 kernels.
      
      To be clear, a 32-bit boot loader will always use (1) and a 64-bit boot
      loader will always use (2). It's just that, if a single kernel image
      supports (1) and (2) that image can be used with both 32-bit and 64-bit
      boot loaders, and hence both 32-bit and 64-bit EFI.
      
      (1) and (2) must be 512 bytes apart at all times, but that is already
      part of the boot ABI and we could never change that delta without
      breaking existing boot loaders anyhow.
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      b8ff87a6
    • x86/efi: Make efi virtual runtime map passing more robust · b7b898ae
      Borislav Petkov authored
      Currently, running SetVirtualAddressMap() and passing the physical
      address of the virtual map array was working only by a lucky coincidence,
      because the memory happened to be present in the EFI page table too. Then
      Toshi went and booted this on a big HP box: the krealloc() manner of
      resizing the memmap we're doing allocated from physical addresses which
      were not mapped anymore, and boom:
      
      http://lkml.kernel.org/r/1386806463.1791.295.camel@misato.fc.hp.com
      
      One way to take care of that issue is to reimplement the krealloc thing
      but with pages. We start with contiguous pages of order 1, i.e. 2 pages,
      and when we deplete that memory (shouldn't happen all that often but you
      know firmware) we realloc the next power-of-two pages.
      
      Having the pages, it is much more handy and easy to map them into the
      EFI page table with the already existing mapping code which we're using
      for building the virtual mappings.
      
      Thanks to Toshi Kani and Matt for the great debugging help.
      Reported-by: Toshi Kani <toshi.kani@hp.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Tested-by: Toshi Kani <toshi.kani@hp.com>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      b7b898ae
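      A sketch of the page-based "realloc" described above: start at order 1
      (two contiguous pages) and, when the current allocation is exhausted,
      grab the next power of two and copy the map over. The helper and variable
      names are illustrative, not the patch's own.

      static void *efi_runtime_map;
      static unsigned int efi_map_order = 1;  /* start with 2 contiguous pages */

      static int efi_grow_runtime_map(unsigned long needed)
      {
              unsigned int order = efi_map_order;
              void *new;

              if (efi_runtime_map && (PAGE_SIZE << order) >= needed)
                      return 0;               /* still fits, nothing to do */

              while ((PAGE_SIZE << order) < needed)
                      order++;                /* next power-of-two size */

              new = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
              if (!new)
                      return -ENOMEM;

              if (efi_runtime_map) {
                      memcpy(new, efi_runtime_map, PAGE_SIZE << efi_map_order);
                      free_pages((unsigned long)efi_runtime_map, efi_map_order);
              }

              efi_runtime_map = new;          /* pages are easy to map into the
                                                 EFI page table later on */
              efi_map_order = order;
              return 0;
      }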