1. 29 March 2016, 1 commit
    • x86/build: Build compressed x86 kernels as PIE · 6d92bc9d
      Authored by H.J. Lu
      The 32-bit x86 assembler in binutils 2.26 will generate R_386_GOT32X
      relocation to get the symbol address in PIC.  When the compressed x86
      kernel isn't built as PIC, the linker optimizes R_386_GOT32X relocations
      to their fixed symbol addresses.  However, when the compressed x86
      kernel is loaded at a different address, it leads to the following
      load failure:
      
        Failed to allocate space for phdrs
      
      during the decompression stage.
      
      If the compressed x86 kernel is relocatable at run-time, it should be
      compiled with -fPIE instead of -fPIC if possible, and should be built as a
      Position Independent Executable (PIE) so that the linker won't optimize
      R_386_GOT32X relocations to their fixed symbol addresses.
      
      Older linkers generate R_386_32 relocations against locally defined
      symbols, _bss, _ebss, _got and _egot, in PIE.  It isn't wrong, just less
      optimal than R_386_RELATIVE.  But the x86 kernel fails to properly handle
      R_386_32 relocations when relocating the kernel.  To generate
      R_386_RELATIVE relocations, we mark _bss, _ebss, _got and _egot as
      hidden in both the 32-bit and 64-bit x86 kernels (see the sketch after
      this entry).
      
      To build a 64-bit compressed x86 kernel as PIE, we need to disable the
      relocation overflow check to avoid relocation overflow errors. We do
      this with a new linker command-line option, -z noreloc-overflow, which
      got added recently:
      
       commit 4c10bbaa0912742322f10d9d5bb630ba4e15dfa7
       Author: H.J. Lu <hjl.tools@gmail.com>
       Date:   Tue Mar 15 11:07:06 2016 -0700
      
          Add -z noreloc-overflow option to x86-64 ld
      
          Add -z noreloc-overflow command-line option to the x86-64 ELF linker to
          disable relocation overflow check.  This can be used to avoid relocation
          overflow check if there will be no dynamic relocation overflow at
          run-time.
      
      The 64-bit compressed x86 kernel is built as PIE only if the linker supports
      -z noreloc-overflow.  So far, the 64-bit relocatable compressed x86 kernel
      boots fine even when it is built as a normal executable.
      Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      [ Edited the changelog and comments. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6d92bc9d
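
      A minimal sketch of the hidden-symbol idea above, written as standalone C
      purely for illustration; the commit itself marks _bss, _ebss, _got and
      _egot with the assembler's .hidden directive in head_32.S/head_64.S, and
      the names below are hypothetical stand-ins:

        /*
         * Hidden visibility tells the linker the symbol cannot be preempted,
         * so a position-independent executable can reference it with a
         * base-relative fixup (R_386_RELATIVE) instead of a symbolic
         * relocation (R_386_32), which the kernel's self-relocation loop
         * does not handle.
         */
        char bss_start_stub[16] __attribute__((visibility("hidden")));
        char bss_end_stub[16]   __attribute__((visibility("hidden")));

        /* Taking the addresses forces relocations against both symbols. */
        unsigned long bss_span(void)
        {
            return (unsigned long)bss_end_stub - (unsigned long)bss_start_stub;
        }
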
  2. 23 March 2016, 1 commit
    • kernel: add kcov code coverage · 5c9a8750
      Authored by Dmitry Vyukov
      kcov provides code coverage collection for coverage-guided fuzzing
      (randomized testing).  Coverage-guided fuzzing is a testing technique
      that uses coverage feedback to determine new interesting inputs to a
      system.  A notable user-space example is AFL
      (http://lcamtuf.coredump.cx/afl/).  However, this technique is not
      widely used for kernel testing due to missing compiler and kernel
      support.
      
      kcov does not aim to collect as much coverage as possible.  It aims to
      collect more or less stable coverage that is a function of syscall inputs.
      To achieve this goal it does not collect coverage in soft/hard interrupts,
      and instrumentation of some inherently non-deterministic or uninteresting
      parts of the kernel is disabled (e.g. scheduler, locking).
      
      Currently there is a single coverage collection mode (tracing), but the
      API anticipates additional collection modes.  Initially I also
      implemented a second mode which exposes coverage in a fixed-size hash
      table of counters (what Quentin used in his original patch).  I've
      dropped the second mode for simplicity.
      
      This patch adds the necessary support on the kernel side.  The complementary
      compiler support was added in gcc revision 231296.
      
      We've used this support to build syzkaller system call fuzzer, which has
      found 90 kernel bugs in just 2 months:
      
        https://github.com/google/syzkaller/wiki/Found-Bugs
      
      We've also found 30+ bugs in our internal systems with syzkaller.
      Another (yet unexplored) direction where kcov coverage would greatly
      help is more traditional "blob mutation".  For example, mounting a
      random blob as a filesystem, or receiving a random blob over wire.
      
      Why not gcov?  A typical fuzzing loop looks as follows: (1) reset
      coverage, (2) execute a bit of code, (3) collect coverage, repeat.  The
      coverage of one iteration can be just a dozen basic blocks (e.g. an
      invalid input).  In that context gcov becomes prohibitively expensive,
      because the reset/collect steps depend on the total number of basic
      blocks/edges in the program (about 2M in the case of the kernel), whereas
      the cost of kcov depends only on the number of executed basic
      blocks/edges.  On top of that, the kernel requires per-thread coverage
      because there are always background threads and unrelated processes that
      also produce coverage.  With inlined gcov instrumentation per-thread
      coverage is not possible.
      
      kcov exposes kernel PCs and control flow to user space, which is
      insecure.  But debugfs should not be mapped as user accessible.  (A
      minimal usage sketch of the debugfs/ioctl interface follows this entry.)
      
      Based on a patch by Quentin Casasnovas.
      
      [akpm@linux-foundation.org: make task_struct.kcov_mode have type `enum kcov_mode']
      [akpm@linux-foundation.org: unbreak allmodconfig]
      [akpm@linux-foundation.org: follow x86 Makefile layout standards]
      Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Cc: syzkaller <syzkaller@googlegroups.com>
      Cc: Vegard Nossum <vegard.nossum@oracle.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Tavis Ormandy <taviso@google.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Kostya Serebryany <kcc@google.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Kees Cook <keescook@google.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: David Drysdale <drysdale@google.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Kirill A. Shutemov <kirill@shutemov.name>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5c9a8750
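
      A minimal user-space sketch of the tracing mode described above, assuming
      the ioctl definitions and debugfs path from the kcov documentation added
      by this patch (KCOV_INIT_TRACE, KCOV_ENABLE, KCOV_DISABLE on
      /sys/kernel/debug/kcov); treat it as an outline rather than a verbatim
      copy of the kernel's API:

        #include <stdio.h>
        #include <stdlib.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <sys/mman.h>

        #define KCOV_INIT_TRACE _IOR('c', 1, unsigned long)
        #define KCOV_ENABLE     _IO('c', 100)
        #define KCOV_DISABLE    _IO('c', 101)
        #define COVER_SIZE      (64 << 10)      /* number of recorded PCs */

        int main(void)
        {
            unsigned long *cover, n, i;
            int fd;

            fd = open("/sys/kernel/debug/kcov", O_RDWR);
            if (fd == -1)
                exit(1);
            /* Set up a per-task buffer of COVER_SIZE trace entries. */
            if (ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE))
                exit(1);
            cover = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
                         PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (cover == MAP_FAILED)
                exit(1);
            /* Start collecting coverage for the current task only. */
            if (ioctl(fd, KCOV_ENABLE, 0))
                exit(1);
            cover[0] = 0;                       /* reset the PC count */

            read(-1, NULL, 0);                  /* the syscall under test */

            n = cover[0];                       /* number of PCs recorded */
            for (i = 0; i < n; i++)
                printf("0x%lx\n", cover[i + 1]);

            ioctl(fd, KCOV_DISABLE, 0);
            munmap(cover, COVER_SIZE * sizeof(unsigned long));
            close(fd);
            return 0;
        }
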
  3. 29 February 2016, 1 commit
    • objtool: Mark non-standard object files and directories · c0dd6716
      Authored by Josh Poimboeuf
      Code which runs outside the kernel's normal mode of operation often does
      unusual things which can cause a static analysis tool like objtool to
      emit false positive warnings:
      
       - boot image
       - vdso image
       - relocation
       - realmode
       - efi
       - head
       - purgatory
       - modpost
      
      Set OBJECT_FILES_NON_STANDARD for their related files and directories,
      which will tell objtool to skip checking them.  It's ok to skip them
      because they don't affect runtime stack traces.
      
      Also skip the following code which does the right thing with respect to
      frame pointers, but is too "special" to be validated by a tool:
      
       - entry
       - mcount
      
      Also skip the test_nx module because it modifies its exception handling
      table at runtime, which objtool can't understand.  Fortunately it's
      just a test module so it doesn't matter much.
      
      Currently objtool is the only user of OBJECT_FILES_NON_STANDARD, but it
      might eventually be useful for other tools.
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Bernd Petrovitsch <bernd@petrovitsch.priv.at>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Chris J Arges <chris.j.arges@canonical.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michal Marek <mmarek@suse.cz>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Cc: Pedro Alves <palves@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: live-patching@vger.kernel.org
      Link: http://lkml.kernel.org/r/366c080e3844e8a5b6a0327dc7e8c2b90ca3baeb.1456719558.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c0dd6716
  4. 21 January 2016, 1 commit
  5. 14 February 2015, 1 commit
    • x86_64: add KASan support · ef7f0d6a
      Authored by Andrey Ryabinin
      This patch adds arch-specific code for the kernel address sanitizer.
      
      16TB of virtual address space is used for shadow memory.  It's located in
      the range [ffffec0000000000 - fffffc0000000000], between vmemmap and the
      %esp fixup stacks (see the mapping sketch after this entry).
      
      At an early stage we map the whole shadow region with the zero page.
      Later, after pages are mapped into the direct mapping address range, we
      unmap the zero pages from the corresponding shadow (see kasan_map_shadow())
      and allocate and map real shadow memory, reusing the vmemmap_populate()
      function.
      
      Also replace __pa with __pa_nodebug before the shadow is initialized.
      With CONFIG_DEBUG_VIRTUAL=y, __pa makes an external function call
      (__phys_addr); __phys_addr is instrumented, so __asan_load could be called
      before the shadow area is initialized.
      Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Yuri Gribov <tetra2005@gmail.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Jim Davis <jim.epost@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ef7f0d6a
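
      A minimal sketch of the address-to-shadow translation behind the numbers
      above, assuming the usual 1/8 shadow scaling (shift by 3) and an offset
      chosen so that the kernel address range maps onto the quoted shadow range;
      the constant and helper name are illustrative, not the exact kernel code:

        #include <stdio.h>

        /* Illustrative constants: each shadow byte covers 8 bytes of memory,
         * and the offset below maps the start of the kernel address range
         * (0xffff800000000000) to 0xffffec0000000000, the start of the 16TB
         * shadow range quoted in the commit. */
        #define KASAN_SHADOW_SCALE_SHIFT 3
        #define KASAN_SHADOW_OFFSET      0xdffffc0000000000UL

        static unsigned long mem_to_shadow(unsigned long addr)
        {
            return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
        }

        int main(void)
        {
            /* Shadow of the first and last kernel addresses. */
            printf("%lx\n", mem_to_shadow(0xffff800000000000UL)); /* ffffec0000000000 */
            printf("%lx\n", mem_to_shadow(0xffffffffffffffffUL)); /* fffffbffffffffff */
            return 0;
        }
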
  6. 13 February 2015, 1 commit
    • x86/efi: Avoid triple faults during EFI mixed mode calls · 96738c69
      Authored by Matt Fleming
      Andy pointed out that if an NMI or MCE is received while we're in the
      middle of an EFI mixed mode call a triple fault will occur. This can
      happen, for example, when issuing an EFI mixed mode call while running
      perf.
      
      The reason for the triple fault is that we execute the mixed mode call
      in 32-bit mode with paging disabled but with 64-bit kernel IDT handlers
      installed throughout the call.
      
      At Andy's suggestion, stop playing the games we currently do at runtime,
      such as disabling paging and installing a 32-bit GDT for __KERNEL_CS. We
      can simply switch to the __KERNEL32_CS descriptor before invoking
      firmware services, and run in compatibility mode. This way, if an
      NMI/MCE does occur the kernel IDT handler will execute correctly, since
      it'll jump to __KERNEL_CS automatically.
      
      However, this change is only possible post-ExitBootServices(). Before
      then the firmware "owns" the machine and expects for its 32-bit IDT
      handlers to be left intact to service interrupts, etc.
      
      So, we now need to distinguish between early boot and runtime
      invocations of EFI services. During early boot, we need to restore the
      GDT that the firmware expects to be present. We can only jump to the
      __KERNEL32_CS code segment for mixed mode calls after ExitBootServices()
      has been invoked.
      
      A liberal sprinkling of comments in the thunking code should make the
      differences in early and late environments more apparent.
      Reported-by: Andy Lutomirski <luto@amacapital.net>
      Tested-by: Borislav Petkov <bp@suse.de>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      96738c69
  7. 27 January 2015, 1 commit
  8. 24 November 2014, 1 commit
  9. 12 November 2014, 1 commit
    • efi/x86: Move x86 back to libstub · 243b6754
      Authored by Ard Biesheuvel
      This reverts commit 84be8805, which itself reverted my original
      attempt to move x86 from #include'ing .c files from across the tree
      to using the EFI stub built as a static library.
      
      The issue that affected the original approach was that splitting
      the implementation into several .o files resulted in the variable
      'efi_early' becoming a global with external linkage, which under
      -fPIC implies that references to it must go through the GOT. However,
      dealing with this additional GOT entry turned out to be troublesome
      on some EFI implementations. (GCC's visibility=hidden attribute is
      supposed to lift this requirement, but it turned out not to work on
      the 32-bit build.)
      
      Instead, use a pure getter function to get a reference to efi_early.
      This approach results in no additional GOT entries being generated,
      so there is no need for any changes in the early GOT handling.  (A sketch
      of the getter pattern follows this entry.)
      Tested-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      243b6754
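
      A minimal sketch of the pure-getter pattern described above, with
      hypothetical type and function names (the real struct efi_config and
      accessor in eboot.c differ); the point is that the variable keeps internal
      linkage and callers go through the accessor, so no GOT entry is created
      for it:

        /* Hypothetical stand-in for the real efi_config data. */
        struct efi_config {
            unsigned long image_handle;
            int is64;
        };

        /* Internal linkage: no external data symbol, hence no GOT entry
         * is needed for it under -fPIC. */
        static struct efi_config *efi_early;

        /* __pure: no side effects, so the compiler may fold repeated calls. */
        __attribute__((__pure__)) const struct efi_config *get_efi_early(void)
        {
            return efi_early;
        }
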
  10. 02 November 2014, 2 commits
  11. 24 September 2014, 1 commit
    • Revert "efi/x86: efistub: Move shared dependencies to <asm/efi.h>" · 84be8805
      Authored by Matt Fleming
      This reverts commit f23cf8bd ("efi/x86: efistub: Move shared
      dependencies to <asm/efi.h>") as well as the x86 parts of commit
      f4f75ad5 ("efi: efistub: Convert into static library").
      
      The road leading to these two reverts is long and winding.
      
      The above two commits were merged during the v3.17 merge window and
      turned the common EFI boot stub code into a static library. This
      necessitated making some symbols global in the x86 boot stub which
      introduced new entries into the early boot GOT.
      
      The problem was that we weren't fixing up the newly created GOT entries
      before invoking the EFI boot stub, which sometimes resulted in hangs or
      resets. This failure was reported by Maarten on his Macbook pro.
      
      The proposed fix was commit 9cb0e394 ("x86/efi: Fixup GOT in all
      boot code paths"). However, that caused issues for Linus when booting
      his Sony Vaio Pro 11. It was subsequently reverted in commit
      f3670394.
      
      So that leaves us back with Maarten's Macbook pro not booting.
      
      At this stage in the release cycle the least risky option is to revert
      the x86 EFI boot stub to the pre-merge window code structure where we
      explicitly #include efi-stub-helper.c instead of linking with the static
      library. The arm64 code remains unaffected.
      
      We can take another swing at the x86 parts for v3.18.
      
      Conflicts:
      	arch/x86/include/asm/efi.h
      Tested-by: Josh Boyer <jwboyer@fedoraproject.org>
      Tested-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
      Tested-by: Leif Lindholm <leif.lindholm@linaro.org> [arm64]
      Tested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      84be8805
  12. 18 August 2014, 3 commits
  13. 19 July 2014, 1 commit
    • efi: efistub: Convert into static library · f4f75ad5
      Authored by Ard Biesheuvel
      This patch changes both x86 and arm64 efistub implementations
      from #including shared .c files under drivers/firmware/efi to
      building shared code as a static library.
      
      The x86 code uses a stub built into the boot executable which
      uncompresses the kernel at boot time. In this case, the library is
      linked into the decompressor.
      
      In the arm64 case, the stub is part of the kernel proper so the library
      is linked into the kernel proper as well.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      f4f75ad5
  14. 10 December 2013, 1 commit
    • x86, build: Pass in additional -mno-mmx, -mno-sse options · 8b3b005d
      Authored by H. Peter Anvin
      In checkin
      
          5551a34e x86-64, build: Always pass in -mno-sse
      
      we unconditionally added -mno-sse to the main build, to keep newer
      compilers from generating SSE instructions from autovectorization.
      However, this did not extend to the special environments
      (arch/x86/boot, arch/x86/boot/compressed, and arch/x86/realmode/rm).
      Add -mno-sse to the compiler command line for these environments, and
      add -mno-mmx to all the environments as well, as we don't want the compiler
      to generate MMX code either (see the sketch after this entry).
      
      This patch also removes a $(cc-option) call for -m32, since we have
      long since stopped supporting compilers too old for the -m32 option,
      and in fact hardcode it in other places in the Makefiles.
      Reported-by: Kevin B. Smith <kevin.b.smith@intel.com>
      Cc: Sunil K. Pandey <sunil.k.pandey@intel.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: H. J. Lu <hjl.tools@gmail.com>
      Link: http://lkml.kernel.org/n/tip-j21wzqv790q834n7yc6g80j1@git.kernel.org
      Cc: <stable@vger.kernel.org> # build fix only
      8b3b005d
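
      As an illustration of why the flags matter, a trivial loop like the one
      below can be auto-vectorized into SSE stores by a modern compiler at
      higher optimization levels; this is standalone example code, not kernel
      code, and -mno-mmx -mno-sse simply forbids such instruction selection in
      the boot, decompressor and realmode environments:

        /* Candidate for auto-vectorization: a modern GCC may turn this loop
         * into SSE stores unless -mno-sse is passed, which is exactly what
         * must not happen in the early boot environments. */
        void fill32(unsigned int *p, unsigned int v, unsigned long n)
        {
            unsigned long i;

            for (i = 0; i < n; i++)
                p[i] = v;
        }
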
  15. 13 October 2013, 2 commits
  16. 10 July 2013, 1 commit
  17. 17 April 2013, 2 commits
  18. 06 April 2013, 1 commit
    • x86: Fix rebuild with EFI_STUB enabled · 91870824
      Authored by Jan Beulich
      eboot.o and efi_stub_$(BITS).o didn't get added to "targets", and hence
      their .cmd files don't get included by the build machinery, leading to
      the files always getting rebuilt.
      
      Rather than adding the two files individually, take the opportunity and
      add $(VMLINUX_OBJS) to "targets" instead, thus allowing the assignment
      at the top of the file to be shrunk quite a bit.
      
      At the same time, remove a pointless flags override line - the variable
      assigned to was misspelled anyway, and the options added are
      meaningless for assembly sources.
      
      [ hpa: the patch is not minimal, but I am taking it for -urgent anyway
        since the excess impact of the patch seems to be small enough. ]
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Link: http://lkml.kernel.org/r/515C5D2502000078000CA6AD@nat28.tlf.novell.com
      Cc: Matthew Garrett <mjg@redhat.com>
      Cc: Matt Fleming <matt.fleming@intel.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      91870824
  19. 17 September 2012, 1 commit
  20. 19 May 2012, 1 commit
    • x86, realmode: 16-bit real-mode code support for relocs tool · 6520fe55
      Authored by H. Peter Anvin
      A new option is added to the relocs tool called '--realmode'.
      This option causes the generation of 16-bit segment relocations
      and 32-bit linear relocations for the real-mode code. When
      the real-mode code is moved to the low-memory during kernel
      initialization, these relocation entries can be used to
      relocate the code properly.
      
      In the assembly code 16-bit segment relocations must be relative
      to the 'real_mode_seg' absolute symbol. Linear relocations must be
      relative to a symbol prefixed with 'pa_'.
      
      16-bit segment relocation is used to load cs:ip in 16-bit code.
      Linear relocations are used in the 32-bit code for relocatable
      data references. They are declared in the linker script of the
      real-mode code.
      
      The relocs tool is moved to arch/x86/tools/relocs.c, and a new target,
      archscripts, is added that can be used to build scripts needed for
      building an architecture; the tool is compiled before building the
      arch/x86 tree.
      
      [ hpa: accelerating this because it detects invalid absolute
        relocations, a serious bug in binutils 2.22.52.0.x which currently
        produces bad kernels. ]
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Link: http://lkml.kernel.org/r/1336501366-28617-2-git-send-email-jarkko.sakkinen@intel.com
      Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@intel.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: <stable@vger.kernel.org>
      6520fe55
  21. 09 May 2012, 2 commits
  22. 29 February 2012, 1 commit
  23. 13 December 2011, 1 commit
    • x86, efi: EFI boot stub support · 291f3632
      Authored by Matt Fleming
      There is currently a large divide between kernel development and the
      development of EFI boot loaders. The idea behind this patch is to give
      the kernel developers full control over the EFI boot process. As
      H. Peter Anvin put it,
      
      "The 'kernel carries its own stub' approach been very successful in
      dealing with BIOS, and would make a lot of sense to me for EFI as
      well."
      
      This patch introduces an EFI boot stub that allows an x86 bzImage to
      be loaded and executed by EFI firmware. The bzImage appears to the
      firmware as an EFI application. Luckily there are enough free bits
      within the bzImage header so that it can masquerade as an EFI
      application, thereby coercing the EFI firmware into loading it and
      jumping to its entry point. The beauty of this masquerading approach
      is that both BIOS and EFI boot loaders can still load and run the same
      bzImage, thereby allowing a single kernel image to work in any boot
      environment (see the sketch after this entry).
      
      The EFI boot stub supports multiple initrds, but they must exist on
      the same partition as the bzImage. Command-line arguments for the
      kernel can be appended after the bzImage name when run from the EFI
      shell, e.g.
      
      Shell> bzImage console=ttyS0 root=/dev/sdb initrd=initrd.img
      
      v7:
       - Fix checkpatch warnings.
      
      v6:
      
       - Try to allocate initrd memory just below hdr->initrd_addr_max.
      
      v5:
      
       - load_options_size is UTF-16, which needs dividing by 2 to convert
         to the corresponding ASCII size.
      
      v4:
      
       - Don't read more than image->load_options_size
      
      v3:
      
       - Fix following warnings when compiling CONFIG_EFI_STUB=n
      
         arch/x86/boot/tools/build.c: In function ‘main’:
         arch/x86/boot/tools/build.c:138:24: warning: unused variable ‘pe_header’
         arch/x86/boot/tools/build.c:138:15: warning: unused variable ‘file_sz’
      
       - As reported by Matthew Garrett, some Apple machines have GOPs that
         don't have hardware attached. We need to weed these out by
         searching for ones that handle the PCIIO protocol.
      
       - Don't allocate memory if no initrds are on cmdline
       - Don't trust image->load_options_size
      
      Maarten Lankhorst noted:
       - Don't strip first argument when booted from efibootmgr
       - Don't allocate too much memory for cmdline
       - Don't update cmdline_size, the kernel considers it read-only
       - Don't accept '\n' for initrd names
      
      v2:
      
       - File alignment was too large, was 8192 should be 512. Reported by
         Maarten Lankhorst on LKML.
       - Added UGA support for graphics
       - Use VIDEO_TYPE_EFI instead of hard-coded number.
       - Move linelength assignment until after we've assigned depth
       - Dynamically fill out AddressOfEntryPoint in tools/build.c
       - Don't use magic number for GDT/TSS stuff. Requested by Andi Kleen
       - The bzImage may need to be relocated as it may have been loaded at
         a high address by the firmware. This was required to get my
         macbook booting because the firmware loaded it at 0x7cxxxxxx, which
         triggers this error in decompress_kernel(),
      
      	if (heap > ((-__PAGE_OFFSET-(128<<20)-1) & 0x7fffffff))
      		error("Destination address too large");
      
      Cc: Mike Waychison <mikew@google.com>
      Cc: Matthew Garrett <mjg@redhat.com>
      Tested-by: Henrik Rydberg <rydberg@euromail.se>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      Link: http://lkml.kernel.org/r/1321383097.2657.9.camel@mfleming-mobl1.ger.corp.intel.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      291f3632
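
      A small host-side sketch of the masquerading described above, relying only
      on the standard PE/COFF layout (an "MZ" signature at offset 0 and a 4-byte
      offset at 0x3c to the "PE\0\0" signature); the helper name is hypothetical
      and a little-endian host is assumed:

        #include <stdio.h>
        #include <stdint.h>
        #include <string.h>

        /* Returns 1 if the image at 'path' (e.g. a bzImage built with
         * CONFIG_EFI_STUB=y) also carries the headers EFI firmware expects
         * from a PE/COFF application. */
        static int looks_like_pe(const char *path)
        {
            unsigned char buf[0x40];
            unsigned char sig[4];
            uint32_t pe_off;
            FILE *f = fopen(path, "rb");

            if (!f)
                return 0;
            if (fread(buf, 1, sizeof(buf), f) != sizeof(buf) ||
                buf[0] != 'M' || buf[1] != 'Z') {       /* DOS/MZ signature */
                fclose(f);
                return 0;
            }
            memcpy(&pe_off, buf + 0x3c, 4);             /* e_lfanew, little-endian */
            if (fseek(f, pe_off, SEEK_SET) != 0 || fread(sig, 1, 4, f) != 4) {
                fclose(f);
                return 0;
            }
            fclose(f);
            return memcmp(sig, "PE\0\0", 4) == 0;       /* PE signature */
        }

        int main(int argc, char **argv)
        {
            if (argc < 2)
                return 1;
            printf("%s: %s\n", argv[1], looks_like_pe(argv[1]) ? "PE" : "not PE");
            return 0;
        }
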
  24. 14 January 2011, 1 commit
    • x86: support XZ-compressed kernel · 30314804
      Authored by Lasse Collin
      This integrates the XZ decompression code to the x86 pre-boot code.
      
      mkpiggy.c is updated to reserve about 32 KiB more buffer safety margin for
      kernel decompression.  It is done unconditionally for all decompressors to
      keep the code simpler.
      
      The XZ decompressor needs around 30 KiB of heap, so the heap size is
      increased to 32 KiB on both x86-32 and x86-64.
      
      Documentation/x86/boot.txt is updated to list the XZ magic number.
      
      With the x86 BCJ filter in XZ, an XZ-compressed x86 kernel tends to be a few
      percent smaller than the equivalent LZMA-compressed kernel.
      Signed-off-by: Lasse Collin <lasse.collin@tukaani.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Alain Knaff <alain@knaff.lu>
      Cc: Albin Tonnerre <albin.tonnerre@free-electrons.com>
      Cc: Phillip Lougher <phillip@lougher.demon.co.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      30314804
  25. 03 August 2010, 1 commit
    • x86, setup: enable early console output from the decompressor · 8fee13a4
      Authored by Yinghai Lu
      This enables the decompressor output to be seen on the serial console.
      Most of the code is shared with the regular boot code.
      
      We could add printf to the decompressor if needed, but currently there
      is no sufficiently compelling user.
      
      -v2: define BOOT_BOOT_H to avoid including boot.h
      -v3: early_serial_base needs to be static in misc.c?
      -v4: create separate string.c printf.c cmdline.c early_serial_console.c
           after hpa's patch that allows global variables in the compressed/misc stage
      -v5: drop the printf.c related changes
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      8fee13a4
  26. 12 January 2010, 1 commit
  27. 26 December 2009, 1 commit
    • x86, compress: Force i386 instructions for the decompressor · 17a2a9b5
      Authored by H. Peter Anvin
      Recently, some distros have started shipping versions of gcc which
      default to -march=i686.  This breaks building kernels for pre-i686
      machines, even if they have been selected in Kconfig, due to the
      generation of CMOV instructions.
      
      There isn't enough benefit to try to preserve the generation of these
      instructions even when selected, so simply force -march=i386 for the
      decompressor when building a 32-bit kernel.
      Reported-and-tested-by: Chris Rankin <rankincj@yahoo.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      LKML-Reference: <219280.97558.qm@web52907.mail.re2.yahoo.com>
      17a2a9b5
  28. 21 August 2009, 1 commit
  29. 19 June 2009, 1 commit
    • gcov: enable GCOV_PROFILE_ALL for x86_64 · 7bf99fb6
      Authored by Peter Oberparleiter
      Enable gcov profiling of the entire kernel on x86_64. Required changes
      include disabling profiling for:
      
      * arch/kernel/acpi/realmode and arch/kernel/boot/compressed:
        not linked to main kernel
      * arch/vdso, arch/kernel/vsyscall_64 and arch/kernel/hpet:
        profiling causes segfaults during boot (incompatible context)
      Signed-off-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: Li Wei <W.Li@Sun.COM>
      Cc: Michael Ellerman <michaele@au1.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Heiko Carstens <heicars2@linux.vnet.ibm.com>
      Cc: Martin Schwidefsky <mschwid2@linux.vnet.ibm.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: WANG Cong <xiyou.wangcong@gmail.com>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7bf99fb6
  30. 09 May 2009, 3 commits
  31. 27 April 2009, 1 commit
    • x86: unify arch/x86/boot/compressed/vmlinux_*.lds · 51b26ada
      Authored by Linus Torvalds
      Look at the:
      
      	diff -u arch/x86/boot/compressed/vmlinux_*.lds
      
      output and realize that they're basically exactly the same except for
      trivial naming differences, and the fact that the 64-bit version has a
      "pgtable" thing.
      
      So unify them.
      
      There's some trivial cleanup there (make the output format a Kconfig thing
      rather than doing #ifdef's for it, and unify both 32-bit and 64-bit BSS
      end to "_ebss", where 32-bit used to use the traditional "_end"), but
      other than that it's really a very mindless and straight conversion.
      
      For example, I think we should aim to remove "startup_32" vs "startup_64",
      and just call it "startup", and get rid of one more difference. I didn't
      do that.
      
      Also, notice the comment in the unified vmlinux.lds.S talks about
      "head_64" and "startup_32" which is an odd and incorrect mix, but that was
      actually what the old 64-bit only lds file had, so the confusion isn't
      new, and now that mixing is arguably more accurate thanks to the
      vmlinux.lds.S file being shared between the two cases ;)
      
      [ Impact: cleanup, unification ]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Acked-by: Sam Ravnborg <sam@ravnborg.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      51b26ada
  32. 03 April 2009, 1 commit