1. 03 May 2016 (1 commit)
    • x86/boot: Extract error reporting functions · dc425a6e
      Committed by Kees Cook
      Currently, to use warn(), a caller would need to include misc.h. However,
      this means it would also get the gcc built-in memcpy family of functions,
      which are unavailable during compressed boot. But since string.c defines
      these memcpy functions for use by misc.c, we end up in a weird circular
      dependency.
      
      To break this loop, move the error reporting functions outside of misc.c
      with their own header so that they can be independently included by
      other sources. Since the screen-writing routines use memmove(), keep the
      low-level *_putstr() functions in misc.c.
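      
      A minimal sketch of what such a standalone header looks like (illustrative
      only; the actual error.h may differ in detail):
      
        /* error.h (sketch): callers get warn()/error() without pulling in
         * misc.h and its memcpy machinery. */
        #ifndef BOOT_COMPRESSED_ERROR_H
        #define BOOT_COMPRESSED_ERROR_H
        
        void warn(char *m);    /* report and continue */
        void error(char *m);   /* report and halt the boot */
        
        #endif /* BOOT_COMPRESSED_ERROR_H */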
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Lasse Collin <lasse.collin@tukaani.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: One Thousand Gnomes <gnomes@lxorguk.ukuu.org.uk>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Link: http://lkml.kernel.org/r/1462229461-3370-2-git-send-email-keescook@chromium.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  2. 29 Apr 2016 (3 commits)
    • x86/boot: Correctly bounds-check relocations · 4abf061b
      Committed by Yinghai Lu
      Relocation handling performs bounds checking on the resulting calculated
      addresses. The existing code uses output_len (VO size plus relocs size) as
      the maximum address. This is not right, since the max_addr check should
      stop at the end of VO and exclude .bss, .brk, etc., which follow. The
      valid range is VO [_text, __bss_start] in the loaded physical address
      space.
      
      This patch adds an export for __bss_start in voffset.h and uses it to
      set the correct limit for max_addr.
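      
      A sketch of the resulting check (illustrative only; VO__text and
      VO___bss_start are the constants the generated voffset.h exports, and the
      real code in arch/x86/boot/compressed may differ in detail):
      
        extern void error(char *m);     /* boot-stub error reporting */
        
        /* Relocations may only target the loaded image proper, i.e. the
         * VO [_text, __bss_start) range in physical memory. */
        static void check_reloc(unsigned long extended, unsigned char *output)
        {
                unsigned long min_addr = (unsigned long)output;
                /* End at __bss_start, not output_len: .bss/.brk are excluded. */
                unsigned long max_addr = min_addr + (VO___bss_start - VO__text);
        
                if (extended < min_addr || extended > max_addr)
                        error("32-bit relocation outside of kernel!\n");
        }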
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      [ Rewrote the changelog. ]
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: lasse.collin@tukaani.org
      Link: http://lkml.kernel.org/r/1461888548-32439-7-git-send-email-keescook@chromium.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/KASLR: Clean up unused code from old 'run_size' and rename it to 'kernel_total_size' · 4d2d5424
      Committed by Yinghai Lu
      Since 'run_size' is now calculated in misc.c, the old script and associated
      argument passing is no longer needed. This patch removes them, and renames
      'run_size' to the more descriptive 'kernel_total_size'.
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Baoquan He <bhe@redhat.com>
      [ Rewrote the changelog, renamed 'run_size' to 'kernel_total_size' ]
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Cc: Junjie Mao <eternal.n08@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: lasse.collin@tukaani.org
      Link: http://lkml.kernel.org/r/1461888548-32439-6-git-send-email-keescook@chromium.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/boot: Fix "run_size" calculation · 67b66625
      Committed by Yinghai Lu
      Currently, the "run_size" variable holds the total kernel size
      (size of code plus brk and bss) and is calculated via the shell script
      arch/x86/tools/calc_run_size.sh. It gets the file offset and mem size
      of the .bss and .brk sections from the vmlinux, and adds them as follows:
      
        run_size = $(( $offsetA + $sizeA + $sizeB ))
      
      However, this is not correct (it is too large). To illustrate, here's
      a walk-through of the script's calculation, compared to the correct way
      to find it.
      
      First, offsetA is found as the starting address of the first .bss or
      .brk section seen in the ELF file. The sizeA and sizeB values are the
      respective section sizes.
      
       [bhe@x1 linux]$ objdump -h vmlinux
      
       vmlinux:     file format elf64-x86-64
      
       Sections:
       Idx Name    Size      VMA               LMA               File off  Algn
        27 .bss    00170000  ffffffff81ec8000  0000000001ec8000  012c8000  2**12
                   ALLOC
        28 .brk    00027000  ffffffff82038000  0000000002038000  012c8000  2**0
                   ALLOC
      
      Here, offsetA is 0x012c8000, with sizeA at 0x00170000 and sizeB at
      0x00027000. The resulting run_size is 0x145f000:
      
       0x012c8000 + 0x00170000 + 0x00027000 = 0x145f000
      
      However, if we instead examine the ELF LOAD program headers, we see a
      different picture.
      
       [bhe@x1 linux]$ readelf -l vmlinux
      
       Elf file type is EXEC (Executable file)
       Entry point 0x1000000
       There are 5 program headers, starting at offset 64
      
       Program Headers:
        Type        Offset             VirtAddr           PhysAddr
                    FileSiz            MemSiz              Flags  Align
        LOAD        0x0000000000200000 0xffffffff81000000 0x0000000001000000
                    0x0000000000b5e000 0x0000000000b5e000  R E    200000
        LOAD        0x0000000000e00000 0xffffffff81c00000 0x0000000001c00000
                    0x0000000000145000 0x0000000000145000  RW     200000
        LOAD        0x0000000001000000 0x0000000000000000 0x0000000001d45000
                    0x0000000000018158 0x0000000000018158  RW     200000
        LOAD        0x000000000115e000 0xffffffff81d5e000 0x0000000001d5e000
                    0x000000000016a000 0x0000000000301000  RWE    200000
        NOTE        0x000000000099bcac 0xffffffff8179bcac 0x000000000179bcac
                    0x00000000000001bc 0x00000000000001bc         4
      
       Section to Segment mapping:
        Segment Sections...
         00     .text .notes __ex_table .rodata __bug_table .pci_fixup .tracedata
                __ksymtab __ksymtab_gpl __ksymtab_strings __init_rodata __param
                __modver
         01     .data .vvar
         02     .data..percpu
         03     .init.text .init.data .x86_cpu_dev.init .parainstructions
                .altinstructions .altinstr_replacement .iommu_table .apicdrivers
                .exit.text .smp_locks .bss .brk
         04     .notes
      
      As mentioned, run_size needs to be the size of the running kernel
      including .bss and .brk. We can see from the Section/Segment mapping
      above that .bss and .brk are included in segment 03 (which corresponds
      to the final LOAD program header). To find the run_size, we calculate
      the end of the LOAD segment from its PhysAddr start (0x0000000001d5e000)
      and its MemSiz (0x0000000000301000), minus the physical load address of
      the kernel (the first LOAD segment's PhysAddr: 0x0000000001000000). The
      resulting run_size is 0x105f000:
      
       0x0000000001d5e000 + 0x0000000000301000 - 0x0000000001000000 = 0x105f000
      
      So, from this we can see that the existing run_size calculation is
      0x400000 too high. And, as it turns out, the correct run_size is
      actually equal to VO_end - VO_text, which is certainly easier to calculate.
       _end:  0xffffffff8205f000
       _text: 0xffffffff81000000
      
       0xffffffff8205f000 - 0xffffffff81000000 = 0x105f000
      
      As a result, run_size is a simple constant, so we don't need to pass it
      around; we already have voffset.h for such things. We can share voffset.h
      between misc.c and header.S instead of getting run_size in other ways.
      This patch moves voffset.h creation code to boot/compressed/Makefile,
      and switches misc.c to use the VO_end - VO_text calculation for run_size.
      
      Dependence before:
      
       boot/header.S ==> boot/voffset.h ==> vmlinux
       boot/header.S ==> compressed/vmlinux ==> compressed/misc.c
      
      Dependence after:
      
       boot/header.S ==> compressed/vmlinux ==> compressed/misc.c ==> boot/voffset.h ==> vmlinux
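      
      In code, the change amounts to something like this sketch (VO__end and
      VO__text are the constants voffset.h exports; the surrounding misc.c
      details are elided):
      
        #include "../voffset.h"   /* VO__end, VO__text from the vmlinux link */
        
        /* The running kernel size, including .bss and .brk, is now a
         * compile-time constant: 0x105f000 in the example above. */
        static const unsigned long run_size = VO__end - VO__text;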
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Baoquan He <bhe@redhat.com>
      [ Rewrote the changelog. ]
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Cc: Junjie Mao <eternal.n08@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: lasse.collin@tukaani.org
      Fixes: e6023367 ("x86, kaslr: Prevent .bss from overlaping initrd")
      Link: http://lkml.kernel.org/r/1461888548-32439-5-git-send-email-keescook@chromium.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  3. 19 Apr 2016 (1 commit)
    • x86/KASLR: Rename aslr.c to kaslr.c · 9b238748
      Committed by Kees Cook
      In order to avoid confusion over what this file provides, rename it to
      kaslr.c since it is used exclusively for the kernel ASLR, not userspace
      ASLR.
      Suggested-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: H.J. Lu <hjl.tools@gmail.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Link: http://lkml.kernel.org/r/1460997735-24785-2-git-send-email-keescook@chromium.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  4. 29 Mar 2016 (1 commit)
    • x86/build: Build compressed x86 kernels as PIE · 6d92bc9d
      Committed by H.J. Lu
      The 32-bit x86 assembler in binutils 2.26 will generate R_386_GOT32X
      relocations to get the symbol address in PIC.  When the compressed x86
      kernel isn't built as PIC, the linker optimizes R_386_GOT32X relocations
      to their fixed symbol addresses.  However, when the compressed x86
      kernel is loaded at a different address, this leads to the following
      load failure:
      
        Failed to allocate space for phdrs
      
      during the decompression stage.
      
      If the compressed x86 kernel is relocatable at run-time, it should be
      compiled with -fPIE instead of -fPIC if possible, and should be built as
      a Position Independent Executable (PIE) so that the linker won't
      optimize R_386_GOT32X relocations to their fixed symbol addresses.
      
      Older linkers generate R_386_32 relocations against locally defined
      symbols, _bss, _ebss, _got and _egot, in PIE.  It isn't wrong, just less
      optimal than R_386_RELATIVE.  But the x86 kernel fails to properly handle
      R_386_32 relocations when relocating the kernel.  To generate
      R_386_RELATIVE relocations, we mark _bss, _ebss, _got and _egot as
      hidden in both 32-bit and 64-bit x86 kernels.
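      
      The kernel does this with assembler directives; in C terms the effect is
      comparable to the following sketch (an illustration, not the actual
      change):
      
        /* Hidden visibility tells the toolchain these symbols bind locally,
         * so PIE references get R_386_RELATIVE instead of R_386_32. */
        extern char _bss[]  __attribute__((visibility("hidden")));
        extern char _ebss[] __attribute__((visibility("hidden")));
        extern char _got[]  __attribute__((visibility("hidden")));
        extern char _egot[] __attribute__((visibility("hidden")));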
      
      To build a 64-bit compressed x86 kernel as PIE, we need to disable the
      relocation overflow check to avoid relocation overflow errors. We do
      this with a new linker command-line option, -z noreloc-overflow, which
      got added recently:
      
       commit 4c10bbaa0912742322f10d9d5bb630ba4e15dfa7
       Author: H.J. Lu <hjl.tools@gmail.com>
       Date:   Tue Mar 15 11:07:06 2016 -0700
      
          Add -z noreloc-overflow option to x86-64 ld
      
          Add -z noreloc-overflow command-line option to the x86-64 ELF linker to
          disable relocation overflow check.  This can be used to avoid relocation
          overflow check if there will be no dynamic relocation overflow at
          run-time.
      
      The 64-bit compressed x86 kernel is built as PIE only if the linker
      supports -z noreloc-overflow.  So far, the 64-bit relocatable compressed
      x86 kernel has booted fine even when it is built as a normal executable.
      Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      [ Edited the changelog and comments. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  5. 23 Mar 2016 (1 commit)
    • kernel: add kcov code coverage · 5c9a8750
      Committed by Dmitry Vyukov
      kcov provides code coverage collection for coverage-guided fuzzing
      (randomized testing).  Coverage-guided fuzzing is a testing technique
      that uses coverage feedback to determine new interesting inputs to a
      system.  A notable user-space example is AFL
      (http://lcamtuf.coredump.cx/afl/).  However, this technique is not
      widely used for kernel testing due to missing compiler and kernel
      support.
      
      kcov does not aim to collect as much coverage as possible. It aims to
      collect more or less stable coverage that is a function of syscall
      inputs. To achieve this goal it does not collect coverage in soft/hard
      interrupts, and instrumentation of some inherently non-deterministic or
      non-interesting parts of the kernel is disabled (e.g. the scheduler,
      locking).
      
      Currently there is a single coverage collection mode (tracing), but the
      API anticipates additional collection modes.  Initially I also
      implemented a second mode which exposes coverage in a fixed-size hash
      table of counters (what Quentin used in his original patch).  I've
      dropped the second mode for simplicity.
      
      This patch adds the necessary support on the kernel side. The
      complementary compiler support was added in gcc revision 231296.
      
      We've used this support to build syzkaller system call fuzzer, which has
      found 90 kernel bugs in just 2 months:
      
        https://github.com/google/syzkaller/wiki/Found-Bugs
      
      We've also found 30+ bugs in our internal systems with syzkaller.
      Another (yet unexplored) direction where kcov coverage would greatly
      help is more traditional "blob mutation".  For example, mounting a
      random blob as a filesystem, or receiving a random blob over wire.
      
      Why not gcov? A typical fuzzing loop looks as follows: (1) reset
      coverage, (2) execute a bit of code, (3) collect coverage, repeat. A
      typical coverage trace can be just a dozen basic blocks (e.g. an invalid
      input). In such a context gcov becomes prohibitively expensive, as the
      reset/collect steps depend on the total number of basic blocks/edges in
      the program (for the kernel that is about 2M). The cost of kcov depends
      only on the number of executed basic blocks/edges. On top of that, the
      kernel requires per-thread coverage because there are always background
      threads and unrelated processes that also produce coverage. With inlined
      gcov instrumentation, per-thread coverage is not possible.
      
      kcov exposes kernel PCs and control flow to user-space, which is
      insecure. But debugfs should not be mapped as user accessible.
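      
      For reference, the user-space side of the tracing mode looks roughly like
      this (condensed from the usage the patch documents; error handling
      omitted):
      
        #include <stdio.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <sys/mman.h>
        
        #define KCOV_INIT_TRACE _IOR('c', 1, unsigned long)
        #define KCOV_ENABLE     _IO('c', 100)
        #define KCOV_DISABLE    _IO('c', 101)
        #define COVER_SIZE      (64 << 10)
        
        int main(void)
        {
                int fd = open("/sys/kernel/debug/kcov", O_RDWR);
                unsigned long *cover, n, i;
        
                /* Size the per-task buffer, map it, enable tracing. */
                ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE);
                cover = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
                             PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
                ioctl(fd, KCOV_ENABLE, 0);
                __atomic_store_n(&cover[0], 0, __ATOMIC_RELAXED);
        
                read(-1, NULL, 0);       /* the syscall under test */
        
                /* cover[0] holds the PC count, cover[1..n] the PCs. */
                n = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
                for (i = 0; i < n; i++)
                        printf("0x%lx\n", cover[i + 1]);
        
                ioctl(fd, KCOV_DISABLE, 0);
                return 0;
        }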
      
      Based on a patch by Quentin Casasnovas.
      
      [akpm@linux-foundation.org: make task_struct.kcov_mode have type `enum kcov_mode']
      [akpm@linux-foundation.org: unbreak allmodconfig]
      [akpm@linux-foundation.org: follow x86 Makefile layout standards]
      Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Cc: syzkaller <syzkaller@googlegroups.com>
      Cc: Vegard Nossum <vegard.nossum@oracle.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Tavis Ormandy <taviso@google.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Kostya Serebryany <kcc@google.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Kees Cook <keescook@google.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: David Drysdale <drysdale@google.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Kirill A. Shutemov <kirill@shutemov.name>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  6. 29 Feb 2016 (1 commit)
    • objtool: Mark non-standard object files and directories · c0dd6716
      Committed by Josh Poimboeuf
      Code which runs outside the kernel's normal mode of operation often does
      unusual things which can cause a static analysis tool like objtool to
      emit false positive warnings:
      
       - boot image
       - vdso image
       - relocation
       - realmode
       - efi
       - head
       - purgatory
       - modpost
      
      Set OBJECT_FILES_NON_STANDARD for their related files and directories,
      which will tell objtool to skip checking them.  It's ok to skip them
      because they don't affect runtime stack traces.
      
      Also skip the following code which does the right thing with respect to
      frame pointers, but is too "special" to be validated by a tool:
      
       - entry
       - mcount
      
      Also skip the test_nx module because it modifies its exception handling
      table at runtime, which objtool can't understand.  Fortunately it's
      just a test module so it doesn't matter much.
      
      Currently objtool is the only user of OBJECT_FILES_NON_STANDARD, but it
      might eventually be useful for other tools.
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Bernd Petrovitsch <bernd@petrovitsch.priv.at>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Chris J Arges <chris.j.arges@canonical.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michal Marek <mmarek@suse.cz>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Cc: Pedro Alves <palves@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: live-patching@vger.kernel.org
      Link: http://lkml.kernel.org/r/366c080e3844e8a5b6a0327dc7e8c2b90ca3baeb.1456719558.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  7. 21 Jan 2016 (1 commit)
  8. 14 Feb 2015 (1 commit)
    • x86_64: add KASan support · ef7f0d6a
      Committed by Andrey Ryabinin
      This patch adds the arch-specific code for the kernel address sanitizer.
      
      16TB of virtual address space is used for shadow memory. It's located in
      the range [ffffec0000000000 - fffffc0000000000], between vmemmap and the
      %esp fixup stacks.
      
      At an early stage we map the whole shadow region with the zero page.
      Later, after pages are mapped into the direct mapping address range, we
      unmap zero pages from the corresponding shadow (see kasan_map_shadow())
      and allocate and map real shadow memory, reusing the vmemmap_populate()
      function.
      
      Also replace __pa with __pa_nodebug before the shadow is initialized.
      With CONFIG_DEBUG_VIRTUAL=y, __pa makes an external function call
      (__phys_addr); __phys_addr is instrumented, so __asan_load could be
      called before the shadow area is initialized.
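      
      The heart of the scheme is a fixed address-to-shadow translation; a
      sketch (the offset is left as a parameter here rather than guessing the
      real constant):
      
        /* One shadow byte tracks an 8-byte granule of address space, which
         * is why 16TB of shadow covers the whole kernel address space. */
        static inline void *kasan_mem_to_shadow(const void *addr,
                                                unsigned long shadow_offset)
        {
                return (void *)(((unsigned long)addr >> 3) + shadow_offset);
        }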
      Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Yuri Gribov <tetra2005@gmail.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Jim Davis <jim.epost@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  9. 13 Feb 2015 (1 commit)
    • x86/efi: Avoid triple faults during EFI mixed mode calls · 96738c69
      Committed by Matt Fleming
      Andy pointed out that if an NMI or MCE is received while we're in the
      middle of an EFI mixed mode call a triple fault will occur. This can
      happen, for example, when issuing an EFI mixed mode call while running
      perf.
      
      The reason for the triple fault is that we execute the mixed mode call
      in 32-bit mode with paging disabled but with 64-bit kernel IDT handlers
      installed throughout the call.
      
      At Andy's suggestion, stop playing the games we currently do at runtime,
      such as disabling paging and installing a 32-bit GDT for __KERNEL_CS. We
      can simply switch to the __KERNEL32_CS descriptor before invoking
      firmware services, and run in compatibility mode. This way, if an
      NMI/MCE does occur the kernel IDT handler will execute correctly, since
      it'll jump to __KERNEL_CS automatically.
      
      However, this change is only possible post-ExitBootServices(). Before
      then, the firmware "owns" the machine and expects its 32-bit IDT
      handlers to be left intact to service interrupts, etc.
      
      So, we now need to distinguish between early boot and runtime
      invocations of EFI services. During early boot, we need to restore the
      GDT that the firmware expects to be present. We can only jump to the
      __KERNEL32_CS code segment for mixed mode calls after ExitBootServices()
      has been invoked.
      
      A liberal sprinkling of comments in the thunking code should make the
      differences in early and late environments more apparent.
      Reported-by: Andy Lutomirski <luto@amacapital.net>
      Tested-by: Borislav Petkov <bp@suse.de>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
  10. 27 Jan 2015 (1 commit)
  11. 24 Nov 2014 (1 commit)
  12. 12 Nov 2014 (1 commit)
    • efi/x86: Move x86 back to libstub · 243b6754
      Committed by Ard Biesheuvel
      This reverts commit 84be8805, which itself reverted my original
      attempt to move x86 from #include'ing .c files from across the tree
      to using the EFI stub built as a static library.
      
      The issue that affected the original approach was that splitting
      the implementation into several .o files resulted in the variable
      'efi_early' becoming a global with external linkage, which under
      -fPIC implies that references to it must go through the GOT. However,
      dealing with this additional GOT entry turned out to be troublesome
      on some EFI implementations. (GCC's visibility=hidden attribute is
      supposed to lift this requirement, but it turned out not to work on
      the 32-bit build.)
      
      Instead, use a pure getter function to get a reference to efi_early.
      This approach results in no additional GOT entries being generated,
      so there is no need for any changes in the early GOT handling.
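      
      A sketch of the getter pattern described (names illustrative, not the
      exact kernel identifiers):
      
        struct efi_config { unsigned long function_table; /* illustrative */ };
        
        /* File-local object, handed out through a function: no external-
         * linkage global, so -fPIC code needs no GOT slot to reach it. */
        static struct efi_config early_config;
        
        struct efi_config *get_efi_config(void)
        {
                return &early_config;
        }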
      Tested-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
  13. 02 Nov 2014 (2 commits)
  14. 24 Sep 2014 (1 commit)
    • Revert "efi/x86: efistub: Move shared dependencies to <asm/efi.h>" · 84be8805
      Committed by Matt Fleming
      This reverts commit f23cf8bd ("efi/x86: efistub: Move shared
      dependencies to <asm/efi.h>") as well as the x86 parts of commit
      f4f75ad5 ("efi: efistub: Convert into static library").
      
      The road leading to these two reverts is long and winding.
      
      The above two commits were merged during the v3.17 merge window and
      turned the common EFI boot stub code into a static library. This
      necessitated making some symbols global in the x86 boot stub which
      introduced new entries into the early boot GOT.
      
      The problem was that we weren't fixing up the newly created GOT entries
      before invoking the EFI boot stub, which sometimes resulted in hangs or
      resets. This failure was reported by Maarten on his Macbook pro.
      
      The proposed fix was commit 9cb0e394 ("x86/efi: Fixup GOT in all
      boot code paths"). However, that caused issues for Linus when booting
      his Sony Vaio Pro 11. It was subsequently reverted in commit
      f3670394.
      
      So that leaves us back with Maarten's Macbook pro not booting.
      
      At this stage in the release cycle the least risky option is to revert
      the x86 EFI boot stub to the pre-merge window code structure where we
      explicitly #include efi-stub-helper.c instead of linking with the static
      library. The arm64 code remains unaffected.
      
      We can take another swing at the x86 parts for v3.18.
      
      Conflicts:
      	arch/x86/include/asm/efi.h
      Tested-by: Josh Boyer <jwboyer@fedoraproject.org>
      Tested-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
      Tested-by: Leif Lindholm <leif.lindholm@linaro.org> [arm64]
      Tested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
  15. 18 Aug 2014 (3 commits)
  16. 19 Jul 2014 (1 commit)
    • efi: efistub: Convert into static library · f4f75ad5
      Committed by Ard Biesheuvel
      This patch changes both x86 and arm64 efistub implementations
      from #including shared .c files under drivers/firmware/efi to
      building shared code as a static library.
      
      The x86 code uses a stub built into the boot executable which
      uncompresses the kernel at boot time. In this case, the library is
      linked into the decompressor.
      
      In the arm64 case, the stub is part of the kernel proper so the library
      is linked into the kernel proper as well.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
  17. 10 Dec 2013 (1 commit)
    • x86, build: Pass in additional -mno-mmx, -mno-sse options · 8b3b005d
      Committed by H. Peter Anvin
      In checkin
      
          5551a34e x86-64, build: Always pass in -mno-sse
      
      we unconditionally added -mno-sse to the main build, to keep newer
      compilers from generating SSE instructions from autovectorization.
      However, this did not extend to the special environments
      (arch/x86/boot, arch/x86/boot/compressed, and arch/x86/realmode/rm).
      Add -mno-sse to the compiler command line for these environments, and
      add -mno-mmx to all the environments as well, as we don't want a
      compiler to generate MMX code either.
      
      This patch also removes a $(cc-option) call for -m32, since we have
      long since stopped supporting compilers too old for the -m32 option,
      and in fact hardcode it in other places in the Makefiles.
      Reported-by: Kevin B. Smith <kevin.b.smith@intel.com>
      Cc: Sunil K. Pandey <sunil.k.pandey@intel.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: H. J. Lu <hjl.tools@gmail.com>
      Link: http://lkml.kernel.org/n/tip-j21wzqv790q834n7yc6g80j1@git.kernel.org
      Cc: <stable@vger.kernel.org> # build fix only
  18. 13 Oct 2013 (2 commits)
  19. 10 Jul 2013 (1 commit)
  20. 17 Apr 2013 (2 commits)
  21. 06 Apr 2013 (1 commit)
    • x86: Fix rebuild with EFI_STUB enabled · 91870824
      Committed by Jan Beulich
      eboot.o and efi_stub_$(BITS).o didn't get added to "targets", and hence
      their .cmd files don't get included by the build machinery, leading to
      the files always getting rebuilt.
      
      Rather than adding the two files individually, take the opportunity and
      add $(VMLINUX_OBJS) to "targets" instead, thus allowing the assignment
      at the top of the file to be shrunk quite a bit.
      
      At the same time, remove a pointless flags override line - the variable
      assigned to was misspelled anyway, and the options added are
      meaningless for assembly sources.
      
      [ hpa: the patch is not minimal, but I am taking it for -urgent anyway
        since the excess impact of the patch seems to be small enough. ]
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Link: http://lkml.kernel.org/r/515C5D2502000078000CA6AD@nat28.tlf.novell.com
      Cc: Matthew Garrett <mjg@redhat.com>
      Cc: Matt Fleming <matt.fleming@intel.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  22. 17 Sep 2012 (1 commit)
  23. 19 May 2012 (1 commit)
    • x86, realmode: 16-bit real-mode code support for relocs tool · 6520fe55
      Committed by H. Peter Anvin
      A new option is added to the relocs tool called '--realmode'.
      This option causes the generation of 16-bit segment relocations
      and 32-bit linear relocations for the real-mode code. When
      the real-mode code is moved to the low-memory during kernel
      initialization, these relocation entries can be used to
      relocate the code properly.
      
      In the assembly code 16-bit segment relocations must be relative
      to the 'real_mode_seg' absolute symbol. Linear relocations must be
      relative to a symbol prefixed with 'pa_'.
      
      16-bit segment relocation is used to load cs:ip in 16-bit code.
      Linear relocations are used in the 32-bit code for relocatable
      data references. They are declared in the linker script of the
      real-mode code.
      
      The relocs tool is moved to arch/x86/tools/relocs.c, and a new
      'archscripts' target is added for building scripts needed to build an
      architecture; these are compiled before building the arch/x86 tree.
      
      [ hpa: accelerating this because it detects invalid absolute
        relocations, a serious bug in binutils 2.22.52.0.x which currently
        produces bad kernels. ]
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Link: http://lkml.kernel.org/r/1336501366-28617-2-git-send-email-jarkko.sakkinen@intel.com
      Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@intel.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: <stable@vger.kernel.org>
  24. 09 May 2012 (2 commits)
  25. 29 Feb 2012 (1 commit)
  26. 13 Dec 2011 (1 commit)
    • x86, efi: EFI boot stub support · 291f3632
      Committed by Matt Fleming
      There is currently a large divide between kernel development and the
      development of EFI boot loaders. The idea behind this patch is to give
      the kernel developers full control over the EFI boot process. As
      H. Peter Anvin put it,
      
      "The 'kernel carries its own stub' approach been very successful in
      dealing with BIOS, and would make a lot of sense to me for EFI as
      well."
      
      This patch introduces an EFI boot stub that allows an x86 bzImage to
      be loaded and executed by EFI firmware. The bzImage appears to the
      firmware as an EFI application. Luckily there are enough free bits
      within the bzImage header so that it can masquerade as an EFI
      application, thereby coercing the EFI firmware into loading it and
      jumping to its entry point. The beauty of this masquerading approach
      is that both BIOS and EFI boot loaders can still load and run the same
      bzImage, thereby allowing a single kernel image to work in any boot
      environment.
      
      The EFI boot stub supports multiple initrds, but they must exist on
      the same partition as the bzImage. Command-line arguments for the
      kernel can be appended after the bzImage name when run from the EFI
      shell, e.g.
      
      Shell> bzImage console=ttyS0 root=/dev/sdb initrd=initrd.img
      
      v7:
       - Fix checkpatch warnings.
      
      v6:
      
       - Try to allocate initrd memory just below hdr->initrd_addr_max.
      
      v5:
      
       - load_options_size is UTF-16, which needs dividing by 2 to convert
         to the corresponding ASCII size.
      
      v4:
      
       - Don't read more than image->load_options_size
      
      v3:
      
       - Fix following warnings when compiling CONFIG_EFI_STUB=n
      
         arch/x86/boot/tools/build.c: In function ‘main’:
         arch/x86/boot/tools/build.c:138:24: warning: unused variable ‘pe_header’
         arch/x86/boot/tools/build.c:138:15: warning: unused variable ‘file_sz’
      
       - As reported by Matthew Garrett, some Apple machines have GOPs that
         don't have hardware attached. We need to weed these out by
         searching for ones that handle the PCIIO protocol.
      
       - Don't allocate memory if no initrds are on cmdline
       - Don't trust image->load_options_size
      
      Maarten Lankhorst noted:
       - Don't strip first argument when booted from efibootmgr
       - Don't allocate too much memory for cmdline
       - Don't update cmdline_size, the kernel considers it read-only
       - Don't accept '\n' for initrd names
      
      v2:
      
       - File alignment was too large, was 8192 should be 512. Reported by
         Maarten Lankhorst on LKML.
       - Added UGA support for graphics
       - Use VIDEO_TYPE_EFI instead of hard-coded number.
       - Move linelength assignment until after we've assigned depth
       - Dynamically fill out AddressOfEntryPoint in tools/build.c
       - Don't use magic number for GDT/TSS stuff. Requested by Andi Kleen
       - The bzImage may need to be relocated as it may have been loaded at
         a high address by the firmware. This was required to get my
         macbook booting because the firmware loaded it at 0x7cxxxxxx, which
         triggers this error in decompress_kernel(),
      
      	if (heap > ((-__PAGE_OFFSET-(128<<20)-1) & 0x7fffffff))
      		error("Destination address too large");
      
      Cc: Mike Waychison <mikew@google.com>
      Cc: Matthew Garrett <mjg@redhat.com>
      Tested-by: Henrik Rydberg <rydberg@euromail.se>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      Link: http://lkml.kernel.org/r/1321383097.2657.9.camel@mfleming-mobl1.ger.corp.intel.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  27. 14 Jan 2011 (1 commit)
    • x86: support XZ-compressed kernel · 30314804
      Committed by Lasse Collin
      This integrates the XZ decompression code to the x86 pre-boot code.
      
      mkpiggy.c is updated to reserve about 32 KiB more buffer safety margin for
      kernel decompression.  It is done unconditionally for all decompressors to
      keep the code simpler.
      
      The XZ decompressor needs around 30 KiB of heap, so the heap size is
      increased to 32 KiB on both x86-32 and x86-64.
      
      Documentation/x86/boot.txt is updated to list the XZ magic number.
      
      With the x86 BCJ filter in XZ, an XZ-compressed x86 kernel tends to be a
      few percent smaller than the equivalent LZMA-compressed kernel.
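      
      The heap bump itself is a one-line constant change; a sketch (the header
      placement and any per-decompressor special cases are assumptions here):
      
        /* 32 KiB of heap covers the XZ decompressor's ~30 KiB requirement
         * on both x86-32 and x86-64. */
        #define BOOT_HEAP_SIZE 0x8000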
      Signed-off-by: Lasse Collin <lasse.collin@tukaani.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Alain Knaff <alain@knaff.lu>
      Cc: Albin Tonnerre <albin.tonnerre@free-electrons.com>
      Cc: Phillip Lougher <phillip@lougher.demon.co.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  28. 03 Aug 2010 (1 commit)
    • x86, setup: enable early console output from the decompressor · 8fee13a4
      Committed by Yinghai Lu
      This enables the decompressor output to be seen on the serial console.
      Most of the code is shared with the regular boot code.
      
      We could add printf to the decompressor if needed, but currently there
      is no sufficiently compelling user.
      
      -v2: define BOOT_BOOT_H to avoid including boot.h
      -v3: early_serial_base needs to be static in misc.c?
      -v4: create separate string.c printf.c cmdline.c early_serial_console.c
           after hpa's patch that allows global variables in the compressed/misc stage
      -v5: remove printf.c and related code
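      
      For illustration, polled 8250-style output of the kind such an early
      serial console uses (a sketch; the inb/outb helpers and exact register
      handling are assumptions):
      
        #define TXR    0        /* transmit register      */
        #define LSR    5        /* line status register   */
        #define XMTRDY 0x20     /* transmitter-ready bit  */
        
        static void serial_putchar(unsigned port, int ch)
        {
                unsigned timeout = 0xffff;
        
                /* Busy-wait until the transmitter can take a byte. */
                while ((inb(port + LSR) & XMTRDY) == 0 && --timeout)
                        ;
                outb(ch, port + TXR);
        }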
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  29. 12 Jan 2010 (1 commit)
  30. 26 Dec 2009 (1 commit)
    • x86, compress: Force i386 instructions for the decompressor · 17a2a9b5
      Committed by H. Peter Anvin
      Recently, some distros have started shipping versions of gcc which
      default to -march=i686.  This breaks building kernels for pre-i686
      machines, even if they have been selected in Kconfig, due to the
      generation of CMOV instructions.
      
      There isn't enough benefit to try to preserve the generation of these
      instructions even when selected, so simply force -march=i386 for the
      decompressor when building a 32-bit kernel.
      Reported-and-tested-by: Chris Rankin <rankincj@yahoo.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      LKML-Reference: <219280.97558.qm@web52907.mail.re2.yahoo.com>
  31. 21 Aug 2009 (1 commit)
  32. 19 Jun 2009 (1 commit)
    • gcov: enable GCOV_PROFILE_ALL for x86_64 · 7bf99fb6
      Committed by Peter Oberparleiter
      Enable gcov profiling of the entire kernel on x86_64. Required changes
      include disabling profiling for:
      
      * arch/kernel/acpi/realmode and arch/kernel/boot/compressed:
        not linked to main kernel
      * arch/vdso, arch/kernel/vsyscall_64 and arch/kernel/hpet:
        profiling causes segfaults during boot (incompatible context)
      Signed-off-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: Li Wei <W.Li@Sun.COM>
      Cc: Michael Ellerman <michaele@au1.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Heiko Carstens <heicars2@linux.vnet.ibm.com>
      Cc: Martin Schwidefsky <mschwid2@linux.vnet.ibm.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: WANG Cong <xiyou.wangcong@gmail.com>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>