1. 12 Oct 2015 (2 commits)
  2. 01 Oct 2015 (1 commit)
    • arm64/efi: Fix boot crash by not padding between EFI_MEMORY_RUNTIME regions · 0ce3cc00
      Committed by Ard Biesheuvel
      The new Properties Table feature introduced in UEFI v2.5 may
      split memory regions that cover PE/COFF memory images into
      separate code and data regions. Since these regions only differ
      in the type (runtime code vs runtime data) and the permission
      bits, but not in the memory type attributes (UC/WC/WT/WB), the
      spec does not require them to be aligned to 64 KB.
      
      Since the relative offset of PE/COFF .text and .data segments
      cannot be changed on the fly, this means that we can no longer
      pad out those regions to be mappable using 64 KB pages.
      Unfortunately, there is no annotation in the UEFI memory map
      that identifies data regions that were split off from a code
      region, so we must apply this logic to all adjacent runtime
      regions whose attributes only differ in the permission bits.
      
      So instead of rounding each memory region to 64 KB alignment at
      both ends, only round down regions that are not directly
      preceded by another runtime region with the same type
      attributes. Since the UEFI spec does not mandate that the memory
      map be sorted, this means we also need to sort it first.
      
      Note that this change will result in all EFI_MEMORY_RUNTIME
      regions whose start addresses are not aligned to the OS page
      size being mapped with executable permissions (i.e., on kernels
      compiled with 64 KB pages). However, since these mappings are
      only active during the time that UEFI Runtime Services are being
      invoked, the window for abuse is rather small.
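      The resulting policy, sketched in C for illustration (the helper
      names and the attribute mask below are assumptions, not the
      kernel's actual code):

      #include <stdint.h>
      #include <stdbool.h>
      #include <stddef.h>

      #define EFI_PAGE_SIZE 0x1000ULL      /* UEFI pages are always 4 KB */
      #define SZ_64K        0x10000ULL

      struct region {                      /* stand-in for efi_memory_desc_t */
              uint64_t phys_addr;
              uint64_t num_pages;
              uint64_t attrs;              /* type attributes + permissions */
      };

      /* Compare only the memory type attributes (UC/WC/WT/WB bits),
         ignoring the permission bits; the mask value is an assumption. */
      static bool same_type_attrs(const struct region *a, const struct region *b)
      {
              const uint64_t type_mask = 0xF;
              return (a->attrs & type_mask) == (b->attrs & type_mask);
      }

      /* Round a region's start down to 64 KB only when it is NOT directly
         preceded by another runtime region with the same type attributes.
         'map' must already be sorted by physical address. */
      static uint64_t mapping_start(const struct region *map, size_t i)
      {
              uint64_t start = map[i].phys_addr;
              uint64_t prev_end;

              if (i == 0)
                      return start & ~(SZ_64K - 1);

              prev_end = map[i - 1].phys_addr +
                         map[i - 1].num_pages * EFI_PAGE_SIZE;

              if (prev_end != start || !same_type_attrs(&map[i - 1], &map[i]))
                      start &= ~(SZ_64K - 1);  /* safe to round down */

              return start;
      }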
      Tested-by: Mark Salter <msalter@redhat.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com> [UEFI 2.4 only]
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      Reviewed-by: Mark Salter <msalter@redhat.com>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Cc: <stable@vger.kernel.org> # v4.0+
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Leif Lindholm <leif.lindholm@linaro.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/1443218539-7610-3-git-send-email-matt@codeblueprint.co.uk
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      0ce3cc00
  3. 23 Sep 2015 (1 commit)
    • x86, efi, kasan: #undef memset/memcpy/memmove per arch · 769a8089
      Committed by Andrey Ryabinin
      In non-instrumented code, KASAN replaces the instrumented
      memset/memcpy/memmove with their non-instrumented analogues
      __memset/__memcpy/__memmove.
      
      However, on x86 the EFI stub is not linked with the kernel.  It uses
      the non-instrumented mem*() functions from arch/x86/boot/compressed/string.c.
      
      So we don't need to replace them with the __mem*() variants in the
      EFI stub.
      
      On ARM64 the EFI stub is linked with the kernel, so we should replace
      the mem*() functions with __mem*(), because the EFI stub runs before
      KASAN sets up its early shadow.
      
      So let's move these #undef mem* statements into each arch's
      asm/efi.h, which is also included by the EFI stub.
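      On x86 the header change boils down to a few preprocessor lines;
      roughly (a sketch; the comment wording in the real asm/efi.h may
      differ):

      #ifdef CONFIG_KASAN
      /*
       * KASAN may redefine memset/memcpy/memmove to __memset/__memcpy/
       * __memmove, which exist only in the kernel binary.  The x86 EFI
       * stub is linked separately and uses the plain mem*() functions
       * from arch/x86/boot/compressed/string.c, so undo the redefines.
       */
      #undef memcpy
      #undef memset
      #undef memmove
      #endif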
      
      Also, this will fix the warning in the 32-bit build reported by the
      kbuild test robot:
      
      	efi-stub-helper.c:599:2: warning: implicit declaration of function 'memcpy'
      
      [akpm@linux-foundation.org: use 80 cols in comment]
      Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Reported-by: Fengguang Wu <fengguang.wu@gmail.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Matt Fleming <matt.fleming@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      769a8089
  4. 05 Jun 2015 (1 commit)
  5. 01 Apr 2015 (1 commit)
  6. 25 Feb 2015 (1 commit)
    • efi/libstub: Fix boundary checking in efi_high_alloc() · 7ed620bb
      Committed by Yinghai Lu
      While adding support for loading the kernel and initrd above 4G to
      grub2 in legacy mode, I was referring to efi_high_alloc(), which
      allocates a buffer for the kernel and then one for the initrd, using
      the kernel buffer's start address as the initrd's upper limit.

      During testing I found that the two buffers overlap when the initrd
      is very big, e.g. 400M.

      It turns out the boundary checking in efi_high_alloc() is not right:
      'end - size' will be the new start, so instead of comparing the new
      start with 'max' we need to make sure 'end' does not exceed 'max'.
      
      [ Basically, with the current efi_high_alloc() code it's possible to
        allocate memory above 'max', because efi_high_alloc() doesn't check
        that the tail of the allocation is below 'max'.
      
        If you have an EFI memory map with a single entry that looks like so,
      
         [0xc0000000-0xc0004000]
      
        and you want to allocate 0x3000 bytes below 0xc0003000, the
        current code will allocate [0xc0001000-0xc0004000], not
        [0xc0000000-0xc0003000] like you would expect. - Matt ]
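      A condensed sketch of the corrected check, under the names used
      above ('start'/'end' bound a free region; this is illustrative, not
      the stub's exact code):

      /* Return the highest 'size'-byte start address within [start, end)
         that keeps the whole allocation below 'max', or 0 if none fits. */
      static unsigned long highest_fit(unsigned long start, unsigned long end,
                                       unsigned long size, unsigned long max)
      {
              if (end > max)            /* clamp the tail first ... */
                      end = max;

              if (start + size > end)   /* ... then check the region fits */
                      return 0;

              return end - size;        /* highest start that still fits */
      }

      With the example above, end is clamped to 0xc0003000 first, so the
      function returns 0xc0000000 rather than an allocation whose tail
      spills past 'max'.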
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      7ed620bb
  7. 18 Feb 2015 (1 commit)
    • Revert "efi/libstub: Call get_memory_map() to obtain map and desc sizes" · 43a9f696
      Committed by Matt Fleming
      This reverts commit d1a8d66b.
      
      Ard reported a boot failure when running UEFI under Qemu and Xen
      while experimenting with various Tianocore build options:
      
       "As it turns out, when allocating room for the UEFI memory map using
        UEFI's AllocatePool (), it may result in two new memory map entries
        being created, for instance, when using Tianocore's preallocated region
        feature. For example, the following region
      
        0x00005ead5000-0x00005ebfffff [Conventional Memory|   |  |  |  |  |WB|WT|WC|UC]
      
        may be split like this
      
        0x00005ead5000-0x00005eae2fff [Conventional Memory|   |  |  |  |  |WB|WT|WC|UC]
        0x00005eae3000-0x00005eae4fff [Loader Data        |   |  |  |  |  |WB|WT|WC|UC]
        0x00005eae5000-0x00005ebfffff [Conventional Memory|   |  |  |  |  |WB|WT|WC|UC]
      
        if the preallocated Loader Data region was chosen to be right in the
        middle of the original free space.
      
        After patch d1a8d66b ("efi/libstub: Call get_memory_map() to
        obtain map and desc sizes"), this is not being dealt with correctly
        anymore, as the existing logic to allocate room for a single additional
        entry has become insufficient."
      
      Mark requested that we reinstate the old loop we had before commit
      d1a8d66b, which grows the memory map buffer until it's big enough to
      hold the EFI memory map.
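      A minimal sketch of that loop, relying only on the spec-defined
      EFI_BUFFER_TOO_SMALL behaviour of GetMemoryMap() ('bs' stands for
      the boot services table; the real stub also records the map key and
      descriptor version):

      status = EFI_BUFFER_TOO_SMALL;
      while (status == EFI_BUFFER_TOO_SMALL) {
              /*
               * Over-allocate: the AllocatePool() call below can itself
               * split free regions and add entries to the memory map.
               */
              map_size += 2 * desc_size;

              status = bs->allocate_pool(EFI_LOADER_DATA, map_size,
                                         (void **)&map);
              if (status != EFI_SUCCESS)
                      return status;

              status = bs->get_memory_map(&map_size, map, &map_key,
                                          &desc_size, &desc_ver);
              if (status == EFI_BUFFER_TOO_SMALL)
                      bs->free_pool(map);   /* still too small: grow, retry */
      }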
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      43a9f696
  8. 14 Feb 2015 (2 commits)
    • x86_64: kasan: add interceptors for memset/memmove/memcpy functions · 393f203f
      Committed by Andrey Ryabinin
      Recently, instrumentation of builtin function calls was removed from
      GCC 5.0.  To check the memory accessed by such functions, userspace
      ASan always uses interceptors for them.

      So now we should do this as well.  This patch declares
      memset/memmove/memcpy as weak symbols.  In mm/kasan/kasan.c we have
      our own implementations of these functions, which check memory before
      accessing it.

      The default memset/memmove/memcpy now always have aliases with a '__'
      prefix.  In files built without KASAN instrumentation (e.g.
      mm/slub.c), the original mem* functions are replaced (via #define)
      with the prefixed variants, because we don't want to check memory
      accesses there.
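      The interceptor pattern looks roughly like this (a sketch;
      check_memory_region() stands in for KASan's internal checking
      helper):

      #undef memcpy
      void *memcpy(void *dest, const void *src, size_t len)
      {
              /* Validate both buffers against shadow memory first... */
              check_memory_region((unsigned long)src, len, false);   /* read  */
              check_memory_region((unsigned long)dest, len, true);   /* write */

              /* ...then defer to the real, non-checking implementation. */
              return __memcpy(dest, src, len);
      }

      Because the default symbols are weak, this definition overrides
      them whenever KASan is built in.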
      Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Yuri Gribov <tetra2005@gmail.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      393f203f
    • kasan: add kernel address sanitizer infrastructure · 0b24becc
      Committed by Andrey Ryabinin
      Kernel Address Sanitizer (KASan) is a dynamic memory error detector.
      It provides a fast and comprehensive solution for finding
      use-after-free and out-of-bounds bugs.

      KASAN uses compile-time instrumentation to check every memory access,
      therefore GCC > v4.9.2 is required.  v4.9.2 almost works, but has
      issues with putting symbol aliases into the wrong section, which
      breaks kasan instrumentation of globals.

      This patch only adds the infrastructure for the kernel address
      sanitizer.  It's not available for use yet.  The idea and some code
      were borrowed from [1].
      
      Basic idea:
      
      The main idea of KASAN is to use shadow memory to record whether each
      byte of memory is safe to access or not, and to use the compiler's
      instrumentation to check the shadow memory on each memory access.

      The address sanitizer uses 1/8 of the kernel's addressable memory for
      shadow memory, and uses a direct mapping with a scale and offset to
      translate a memory address to its corresponding shadow address.

      Here is the function that translates an address to its corresponding
      shadow address:
      
           unsigned long kasan_mem_to_shadow(unsigned long addr)
           {
                      return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
           }
      
      where KASAN_SHADOW_SCALE_SHIFT = 3.
      
      So for every 8 bytes there is one corresponding byte of shadow memory.
      The following encoding is used for each shadow byte: 0 means that all
      8 bytes of the corresponding memory region are valid for access; k
      (1 <= k <= 7) means that the first k bytes are valid for access, and
      the other (8 - k) bytes are not; any negative value indicates that
      the entire 8-byte region is inaccessible.  Different negative values
      are used to distinguish between different kinds of inaccessible
      memory (redzones, freed memory); see mm/kasan/kasan.h.

      To be able to detect accesses to bad memory we need a special
      compiler.  Such a compiler inserts specific function calls
      (__asan_load*(addr), __asan_store*(addr)) before each memory access
      of size 1, 2, 4, 8 or 16.

      These functions check whether the memory region is valid to access
      by consulting the corresponding shadow memory.  If an access is not
      valid, an error is printed.
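      For example, a 1-byte access check against this encoding looks
      roughly like the following (simplified from what mm/kasan/kasan.c
      does):

      static bool memory_is_poisoned_1(unsigned long addr)
      {
              s8 shadow = *(s8 *)kasan_mem_to_shadow(addr);

              if (shadow == 0)
                      return false;   /* whole 8-byte granule accessible */

              /*
               * shadow > 0: only the first 'shadow' bytes of the granule
               * are accessible, so the byte's offset within the granule
               * must be below it.  shadow < 0: the whole granule is
               * inaccessible, and the comparison is always true.
               */
              return (s8)(addr & 7) >= shadow;
      }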
      
      Historical background of the address sanitizer from Dmitry Vyukov:
      
      	"We've developed the set of tools, AddressSanitizer (Asan),
      	ThreadSanitizer and MemorySanitizer, for user space. We actively use
      	them for testing inside of Google (continuous testing, fuzzing,
      	running prod services). To date the tools have found more than 10'000
      	scary bugs in Chromium, Google internal codebase and various
      	open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and
      	lots of others): [2] [3] [4].
      	The tools are part of both gcc and clang compilers.
      
      	We have not yet done massive testing under the Kernel AddressSanitizer
      	(it's kind of chicken and egg problem, you need it to be upstream to
      	start applying it extensively). To date it has found about 50 bugs.
      	Bugs that we've found in upstream kernel are listed in [5].
      	We've also found ~20 bugs in our internal version of the kernel. Also
      	people from Samsung and Oracle have found some.
      
      	[...]
      
      	As others noted, the main feature of AddressSanitizer is its
      	performance due to inline compiler instrumentation and simple linear
      	shadow memory. User-space Asan has ~2x slowdown on computational
      	programs and ~2x memory consumption increase. Taking into account that
      	kernel usually consumes only small fraction of CPU and memory when
      	running real user-space programs, I would expect that kernel Asan will
      	have ~10-30% slowdown and similar memory consumption increase (when we
      	finish all tuning).
      
      	I agree that Asan can well replace kmemcheck. We have plans to start
      	working on Kernel MemorySanitizer that finds uses of uninitialized
      	memory. Asan+Msan will provide feature-parity with kmemcheck. As
      	others noted, Asan will unlikely replace debug slab and pagealloc that
      	can be enabled at runtime. Asan uses compiler instrumentation, so even
      	if it is disabled, it still incurs visible overheads.
      
      	Asan technology is easily portable to other architectures. Compiler
      	instrumentation is fully portable. Runtime has some arch-dependent
      	parts like shadow mapping and atomic operation interception. They are
      	relatively easy to port."
      
      Comparison with other debugging features:
      ========================================
      
      KMEMCHECK:
      
        - KASan can do almost everything that kmemcheck can.  KASan uses
          compile-time instrumentation, which makes it significantly faster
          than kmemcheck.  The only advantage of kmemcheck over KASan is
          the detection of uninitialized memory reads.

          Some brief performance testing showed that KASan could be
          500x-600x faster than kmemcheck:
      
      $ netperf -l 30
      		MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
      		Recv   Send    Send
      		Socket Socket  Message  Elapsed
      		Size   Size    Size     Time     Throughput
      		bytes  bytes   bytes    secs.    10^6bits/sec
      
      no debug:	87380  16384  16384    30.00    41624.72
      
      kasan inline:	87380  16384  16384    30.00    12870.54
      
      kasan outline:	87380  16384  16384    30.00    10586.39
      
      kmemcheck: 	87380  16384  16384    30.03      20.23
      
        - Also, kmemcheck can't work on multiple CPUs: it always sets the
          number of CPUs to 1.  KASan doesn't have this limitation.
      
      DEBUG_PAGEALLOC:
      	- KASan is slower than DEBUG_PAGEALLOC, but KASan works at
      	  sub-page granularity, so it is able to find more bugs.
      
      SLUB_DEBUG (poisoning, redzones):
      	- SLUB_DEBUG has lower overhead than KASan.
      
      	- SLUB_DEBUG is in most cases not able to detect bad reads;
      	  KASan is able to detect both bad reads and writes.

      	- In some cases (e.g. a redzone overwrite) SLUB_DEBUG detects
      	  bugs only on allocation/freeing of an object.  KASan catches
      	  a bug right when it happens, so we always know the exact
      	  place of the first bad read/write.
      
      [1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
      [2] https://code.google.com/p/address-sanitizer/wiki/FoundBugs
      [3] https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
      [4] https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
      [5] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies
      
      Based on work by Andrey Konovalov.
      Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Acked-by: Michal Marek <mmarek@suse.cz>
      Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Cc: Yuri Gribov <tetra2005@gmail.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0b24becc
  9. 21 Jan 2015 (1 commit)
    • efi/libstub: Call get_memory_map() to obtain map and desc sizes · d1a8d66b
      Committed by Ard Biesheuvel
      This fixes two minor issues in the implementation of get_memory_map():
      - Currently, it assumes that sizeof(efi_memory_desc_t) == desc_size,
        which is usually true, but not mandated by the spec. (This was added
        intentionally to allow future additions to the definition of
        efi_memory_desc_t). The way the loop is implemented currently, the
        added slack space may be insufficient if desc_size is larger, which in
        some corner cases could result in the loop never terminating.
      - It allocates 32 efi_memory_desc_t entries first (again, using the
        size of the struct instead of desc_size), and frees and reallocates
        if that turns out to be insufficient. Few UEFI implementations have
        such small memory maps, which results in an unnecessary
        allocate/free pair on each invocation.
      
      Fix this by calling the get_memory_map() boot service first with a '0'
      input value for map size to retrieve the map size and desc size from the
      firmware and only then perform the allocation, using desc_size rather
      than sizeof(efi_memory_desc_t).
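      A sketch of the resulting two-call pattern ('bs' is the boot
      services table; names simplified):

      map_size = 0;
      status = bs->get_memory_map(&map_size, NULL, &key,
                                  &desc_size, &desc_ver);
      if (status != EFI_BUFFER_TOO_SMALL)
              return EFI_LOAD_ERROR;    /* spec guarantees this status */

      /*
       * Allocate using the firmware-reported desc_size, with a little
       * headroom since AllocatePool() may itself grow the map (the
       * revert above shows one extra entry can still be insufficient).
       */
      map_size += desc_size;
      status = bs->allocate_pool(EFI_LOADER_DATA, map_size, (void **)&map);
      if (status != EFI_SUCCESS)
              return status;

      status = bs->get_memory_map(&map_size, map, &key,
                                  &desc_size, &desc_ver);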
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      d1a8d66b
  10. 16 Jan 2015 (1 commit)
    • arm64/efi: efistub: Apply __init annotation · ddeeefe2
      Committed by Ard Biesheuvel
      This ensures all stub components are freed when the kernel proper is
      done booting, by prefixing the names of all ELF sections that have
      the SHF_ALLOC attribute with ".init". This approach ensures that even
      implicitly emitted allocated data (like initializer values and string
      literals) is covered.
      
      At the same time, remove some __init annotations in the stub that have
      now become redundant, and add the __init annotation to handle_kernel_image
      which will now trigger a section mismatch warning without it.
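      For reference, __init is essentially a section attribute; a
      simplified version of its definition (the kernel's real one in
      <linux/init.h> carries additional attributes such as __cold):

      #define __init  __attribute__((__section__(".init.text")))

      /* Functions so marked land in .init.text and are discarded once
         the kernel proper has finished booting: */
      static efi_status_t __init handle_kernel_image(/* parameters omitted */);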
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      ddeeefe2
  11. 13 Jan 2015 (1 commit)
    • arm64/efi: move SetVirtualAddressMap() to UEFI stub · f3cdfd23
      Committed by Ard Biesheuvel
      In order to support kexec, the kernel needs to be able to deal with the
      state of the UEFI firmware after SetVirtualAddressMap() has been called.
      To avoid having separate code paths for non-kexec and kexec, let's move
      the call to SetVirtualAddressMap() to the stub: this will guarantee us
      that it will only be called once (since the stub is not executed during
      kexec), and ensures that the UEFI state is identical between kexec and
      normal boot.
      
      This implies that the layout of the virtual mapping needs to be created
      by the stub as well. All regions are rounded up to a naturally aligned
      multiple of 64 KB (for compatibility with 64k pages kernels) and recorded
      in the UEFI memory map. The kernel proper reads those values and installs
      the mappings in a dedicated set of page tables that are swapped in during
      UEFI Runtime Services calls.
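      Sketched against the UEFI-specified SetVirtualAddressMap() signature
      ('virtual_base' is an assumed name for the chosen base; the real
      stub computes this while building the virtual mapping):

      efi_memory_desc_t *desc = memory_map;
      u64 vaddr = virtual_base;   /* chosen base of the runtime window */
      int i;

      /* Record a 64 KB-aligned virtual address for each runtime region. */
      for (i = 0; i < map_size / desc_size; i++,
           desc = (void *)desc + desc_size) {
              if (!(desc->attribute & EFI_MEMORY_RUNTIME))
                      continue;
              desc->virt_addr = vaddr;
              vaddr += round_up(desc->num_pages * EFI_PAGE_SIZE, SZ_64K);
      }

      /* One-shot: only the stub ever calls this, so kexec'ed kernels see
         exactly the same post-SetVirtualAddressMap() firmware state. */
      status = rt->set_virtual_address_map(map_size, desc_size,
                                           desc_version, memory_map);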
      Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
      Acked-by: Matt Fleming <matt.fleming@intel.com>
      Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      f3cdfd23
  12. 12 Jan 2015 (1 commit)
  13. 05 Nov 2014 (1 commit)
    • efi: efi-stub: notify on DTB absence · 0bcaa904
      Committed by Mark Rutland
      In the absence of a DTB configuration table, the EFI stub will happily
      continue attempting to boot a kernel, despite the fact that this kernel
      may not function without a description of the hardware. In this case, as
      with a typo'd "dtb=" option (e.g. "dbt=") or many other possible
      failures, the only output seen by the user will be the rather terse
      output from the EFI stub:
      
      EFI stub: Booting Linux Kernel...
      
      To aid those attempting to debug such failures, this patch adds a notice
      when no DTB is found, making the output more helpful:
      
      EFI stub: Booting Linux Kernel...
      EFI stub: Generating empty DTB
      
      Additionally, a positive acknowledgement is added when a user-specified
      DTB is in use:
      
      EFI stub: Booting Linux Kernel...
      EFI stub: Using DTB from command line
      
      Similarly, a positive acknowledgement is added when a DTB from a
      configuration table is in use:
      
      EFI stub: Booting Linux Kernel...
      EFI stub: Using DTB from configuration table
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Roy Franz <roy.franz@linaro.org>
      Acked-by: Matt Fleming <matt.fleming@intel.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      0bcaa904
  14. 04 Oct 2014 (1 commit)
    • efi: Add efi= parameter parsing to the EFI boot stub · 5a17dae4
      Committed by Matt Fleming
      We need a way to customize the behaviour of the EFI boot stub, in
      particular, we need a way to disable the "chunking" workaround, used
      when reading files from the EFI System Partition.
      
      One of my machines doesn't cope well when reading files in 1MB chunks to
      a buffer above the 4GB mark - it appears that the "chunking" bug
      workaround triggers another firmware bug. This was only discovered with
      commit 4bf7111f ("x86/efi: Support initrd loaded above 4G"), and
      that commit is perfectly valid. The symptom I observed was a corrupt
      initrd rather than any kind of crash.
      
      efi= is now used to specify EFI parameters in two very different
      execution environments: the EFI boot stub and the kernel proper
      during boot.
      
      Enabling efi=nochunk also yields a slight performance optimization,
      but that's offset by the fact that you're more likely to run into
      firmware issues, at least on x86. This is the rationale for leaving
      the workaround enabled by default.
      
      Also provide some documentation for EFI_READ_CHUNK_SIZE and why we're
      using the current value of 1MB.
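      A sketch of the kind of command-line scan involved ('chunk_size'
      and the parsing below are illustrative; the stub's real parser
      handles the option list more carefully):

      static unsigned long chunk_size = EFI_READ_CHUNK_SIZE;  /* 1 MB default */

      static void parse_efi_param(const char *cmdline)
      {
              const char *str = strstr(cmdline, "efi=");

              if (!str)
                      return;

              /* Walk the comma-separated sub-options up to the next space. */
              for (str += strlen("efi="); *str && *str != ' '; ) {
                      if (!strncmp(str, "nochunk", 7))
                              chunk_size = -1UL;   /* disable chunked reads */

                      while (*str && *str != ' ' && *str != ',')
                              str++;               /* skip this sub-option */
                      if (*str == ',')
                              str++;
              }
      }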
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Roy Franz <roy.franz@linaro.org>
      Cc: Maarten Lankhorst <m.b.lankhorst@gmail.com>
      Cc: Leif Lindholm <leif.lindholm@linaro.org>
      Cc: Borislav Petkov <bp@suse.de>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      5a17dae4
  15. 09 Sep 2014 (1 commit)
    • efi/arm64: Fix fdt-related memory reservation · 0ceac9e0
      Committed by Mark Salter
      Commit 86c8b27a ("arm64: ignore DT memreserve entries when booting
      in UEFI mode") prevents early_init_fdt_scan_reserved_mem() from
      being called for arm64 kernels booting via UEFI. This was done
      because the kernel will use the UEFI memory map to determine
      reserved memory regions. That approach has problems in that
      early_init_fdt_scan_reserved_mem() also reserves the FDT itself and
      any node-specific reserved memory. With some kernel configs, the FDT
      may by chance be overwritten before it can be unflattened, and the
      kernel will fail to boot. More subtle problems will result if the
      FDT has node-specific reserved memory which is not really reserved.
      
      This patch has the UEFI stub remove the memory reserve map entries
      from the FDT as it does with the memory nodes. This allows
      early_init_fdt_scan_reserved_mem() to be called unconditionally
      so that the other needed reservations are made.
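      A sketch of the removal using the libfdt API (illustrative; the
      stub's actual code may structure this differently):

      #include <libfdt.h>

      /* Strip all memreserve entries so the kernel's later call to
         early_init_fdt_scan_reserved_mem() doesn't act on regions that
         are already tracked via the UEFI memory map. */
      static int remove_memreserve_entries(void *fdt)
      {
              int i, err;

              /* Delete from the end so indices stay valid. */
              for (i = fdt_num_mem_rsv(fdt) - 1; i >= 0; i--) {
                      err = fdt_del_mem_rsv(fdt, i);
                      if (err)
                              return err;
              }
              return 0;
      }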
      Signed-off-by: Mark Salter <msalter@redhat.com>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      0ceac9e0
  16. 19 Jul 2014 (1 commit)
    • efi: efistub: Convert into static library · f4f75ad5
      Committed by Ard Biesheuvel
      This patch changes both the x86 and arm64 EFI stub implementations
      from #including shared .c files under drivers/firmware/efi to
      building the shared code as a static library.
      
      The x86 code uses a stub built into the boot executable which
      uncompresses the kernel at boot time. In this case, the library is
      linked into the decompressor.
      
      In the arm64 case, the stub is part of the kernel proper so the library
      is linked into the kernel proper as well.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      f4f75ad5