1. 06 Apr 2019 (1 commit)
    • x86/build: Specify elf_i386 linker emulation explicitly for i386 objects · d2053718
      Authored by George Rimar
      [ Upstream commit 927185c124d62a9a4d35878d7f6d432a166b74e3 ]
      
      The kernel uses the OUTPUT_FORMAT linker script command in its linker
      scripts. Most of the time, the -m option is passed to the linker with the
      correct architecture, but sometimes (at least for x86_64) the -m option
      contradicts the OUTPUT_FORMAT directive.
      
      Specifically, arch/x86/boot and arch/x86/realmode/rm produce i386 object
      files, but are linked with the -m elf_x86_64 linker flag when building
      for x86_64.
      
      The GNU linker manpage doesn't explicitly state any tie-breaker between
      -m and OUTPUT_FORMAT. In practice, with the BFD and gold linkers,
      OUTPUT_FORMAT overrides the emulation value specified with the -m option.
      
      LLVM lld behaves differently, however. When supplied with conflicting -m
      and OUTPUT_FORMAT values it fails with the following error message:
      
        ld.lld: error: arch/x86/realmode/rm/header.o is incompatible with elf_x86_64
      
      Therefore, just add the correct -m after the incorrect one (the emulation
      specified last wins), so the linker invocation looks like this:
      
        ld -m elf_x86_64 -z max-page-size=0x200000 -m elf_i386 --emit-relocs -T \
          realmode.lds header.o trampoline_64.o stack.o reboot.o -o realmode.elf
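
      In the tree this presumably amounts to one-line additions in the affected
      Makefiles; a hedged sketch, assuming the usual per-target LDFLAGS
      variables that match the invocation above:

        # arch/x86/realmode/rm/Makefile: make elf_i386 the last -m, so it wins
        LDFLAGS_realmode.elf := -m elf_i386 --emit-relocs -T

        # arch/x86/boot/Makefile: likewise when linking setup.elf
        LDFLAGS_setup.elf := -m elf_i386 -T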
      
      This is not a functional change for GNU ld, because (although not
      explicitly documented) OUTPUT_FORMAT overrides -m EMULATION.
      
      Tested by building x86_64 kernel with GNU gcc/ld toolchain and booting
      it in QEMU.
      
       [ bp: massage and clarify text. ]
      Suggested-by: Dmitry Golovin <dima@golovin.in>
      Signed-off-by: George Rimar <grimar@accesssoftek.com>
      Signed-off-by: Tri Vo <trong@android.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Tested-by: Tri Vo <trong@android.com>
      Tested-by: Nick Desaulniers <ndesaulniers@google.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Michael Matz <matz@suse.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: morbo@google.com
      Cc: ndesaulniers@google.com
      Cc: ruiu@google.com
      Cc: x86-ml <x86@kernel.org>
      Link: https://lkml.kernel.org/r/20190111201012.71210-1-trong@android.com
      Signed-off-by: Sasha Levin <sashal@kernel.org>
  2. 07 Nov 2017 (1 commit)
    • x86/build: Factor out fdimage/isoimage generation commands to standalone script · 4366d57a
      Authored by Changbin Du
      The build messages for fdimage/isoimage generation are pretty unstructured:
      just the raw shell command blocks are printed.
      
      Emit shortened messages similar to the existing kbuild messages, and move
      the Makefile commands into a separate shell script, which is much easier
      to handle.
      
      This patch factors out the commands used for fdimage/isoimage generation
      from arch/x86/boot/Makefile to a new script, arch/x86/boot/genimage.sh.
      It then adds the new kbuild command 'genimage', which invokes that script.
      All fdimage/isoimage files are now generated by a call to 'genimage' with
      different parameters, as sketched below.
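
      For reference, the new kbuild hook plausibly looks like this (a sketch;
      the exact genimage.sh arguments are an assumption):

        quiet_cmd_genimage = GENIMAGE $3
              cmd_genimage = sh $(srctree)/$(src)/genimage.sh $2 $3 $(obj)/bzImage \
                                $(obj)/mtools.conf '$(image_cmdline)' $(FDINITRD)

        isoimage: $(obj)/bzImage
                $(call cmd,genimage,isoimage,$(obj)/image.iso)
                @$(kecho) 'Kernel: $(obj)/image.iso is ready'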
      
      Now 'make isoimage' becomes:
      
      	...
      	Kernel: arch/x86/boot/bzImage is ready  (#30)
      	  GENIMAGE arch/x86/boot/image.iso
      	Size of boot image is 4 sectors -> No emulation
      	 15.37% done, estimate finish Sun Nov  5 23:36:57 2017
      	 30.68% done, estimate finish Sun Nov  5 23:36:57 2017
      	 46.04% done, estimate finish Sun Nov  5 23:36:57 2017
      	 61.35% done, estimate finish Sun Nov  5 23:36:57 2017
      	 76.69% done, estimate finish Sun Nov  5 23:36:57 2017
      	 92.00% done, estimate finish Sun Nov  5 23:36:57 2017
      	Total translation table size: 2048
      	Total rockridge attributes bytes: 659
      	Total directory bytes: 0
      	Path table size(bytes): 10
      	Max brk space used 0
      	32608 extents written (63 MB)
      	Kernel: arch/x86/boot/image.iso is ready
      
      Before:
      
      	Kernel: arch/x86/boot/bzImage is ready  (#63)
      	rm -rf arch/x86/boot/isoimage
      	mkdir arch/x86/boot/isoimage
      	for i in lib lib64 share end ; do \
      		if [ -f /usr/$i/syslinux/isolinux.bin ] ; then \
      			cp /usr/$i/syslinux/isolinux.bin arch/x86/boot/isoimage ; \
      			if [ -f /usr/$i/syslinux/ldlinux.c32 ]; then \
      				cp /usr/$i/syslinux/ldlinux.c32 arch/x86/boot/isoimage ; \
      			fi ; \
      			break ; \
      		fi ; \
      		if [ $i = end ] ; then exit 1 ; fi ; \
      	done
      	...
      Suggested-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Changbin Du <changbin.du@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1509939179-7556-2-git-send-email-changbin.du@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  3. 20 Jul 2017 (1 commit)
  4. 07 Nov 2016 (1 commit)
  5. 19 Jul 2016 (1 commit)
    • Kbuild: arch: look for generated headers in obtree · 58ab5e0c
      Authored by Arnd Bergmann
      There are very few files that need to pass an -I$(obj) option to gcc for
      the preprocessor or the assembler. For C files, we always add these for
      both the objtree and srctree, but for the other ones we require the
      Makefile to add them, and Kbuild then adds the option for both trees.
      
      As a preparation for changing the meaning of the -I$(obj) directive to
      refer only to the srctree, this changes the two instances in arch/x86 to
      use an explicit $(objtree) prefix where needed; otherwise we won't find
      the headers any more, as reported by the kbuild 0day builder:
      
      arch/x86/realmode/rm/realmode.lds.S:75:20: fatal error: pasyms.h: No such file or directory
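
      A hedged sketch of the kind of change involved, assuming the realmode
      linker script preprocessing flags are one of the two spots:

        # before: relied on the implicit -I$(obj) to find generated headers
        # CPPFLAGS_realmode.lds += -P -C -I$(obj)
        # after: point explicitly at the objtree, where pasyms.h is generated
        CPPFLAGS_realmode.lds += -P -C -I$(objtree)/$(obj)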
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Michal Marek <mmarek@suse.com>
  6. 08 Jun 2016 (1 commit)
  7. 29 Apr 2016 (2 commits)
    • x86/boot: Fix "run_size" calculation · 67b66625
      Authored by Yinghai Lu
      Currently, the "run_size" variable holds the total kernel size
      (size of code plus brk and bss) and is calculated via the shell script
      arch/x86/tools/calc_run_size.sh. It gets the file offset and mem size
      of the .bss and .brk sections from the vmlinux, and adds them as follows:
      
        run_size = $(( $offsetA + $sizeA + $sizeB ))
      
      However, this is not correct (it is too large). To illustrate, here's
      a walk-through of the script's calculation, compared to the correct way
      to find it.
      
      First, offsetA is found as the starting address of the first .bss or
      .brk section seen in the ELF file. The sizeA and sizeB values are the
      respective section sizes.
      
       [bhe@x1 linux]$ objdump -h vmlinux
      
       vmlinux:     file format elf64-x86-64
      
       Sections:
       Idx Name    Size      VMA               LMA               File off  Algn
        27 .bss    00170000  ffffffff81ec8000  0000000001ec8000  012c8000  2**12
                   ALLOC
        28 .brk    00027000  ffffffff82038000  0000000002038000  012c8000  2**0
                   ALLOC
      
      Here, offsetA is 0x012c8000, with sizeA at 0x00170000 and sizeB at
      0x00027000. The resulting run_size is 0x145f000:
      
       0x012c8000 + 0x00170000 + 0x00027000 = 0x145f000
      
      However, if we instead examine the ELF LOAD program headers, we see a
      different picture.
      
       [bhe@x1 linux]$ readelf -l vmlinux
      
       Elf file type is EXEC (Executable file)
       Entry point 0x1000000
       There are 5 program headers, starting at offset 64
      
       Program Headers:
        Type        Offset             VirtAddr           PhysAddr
                    FileSiz            MemSiz              Flags  Align
        LOAD        0x0000000000200000 0xffffffff81000000 0x0000000001000000
                    0x0000000000b5e000 0x0000000000b5e000  R E    200000
        LOAD        0x0000000000e00000 0xffffffff81c00000 0x0000000001c00000
                    0x0000000000145000 0x0000000000145000  RW     200000
        LOAD        0x0000000001000000 0x0000000000000000 0x0000000001d45000
                    0x0000000000018158 0x0000000000018158  RW     200000
        LOAD        0x000000000115e000 0xffffffff81d5e000 0x0000000001d5e000
                    0x000000000016a000 0x0000000000301000  RWE    200000
        NOTE        0x000000000099bcac 0xffffffff8179bcac 0x000000000179bcac
                    0x00000000000001bc 0x00000000000001bc         4
      
       Section to Segment mapping:
        Segment Sections...
         00     .text .notes __ex_table .rodata __bug_table .pci_fixup .tracedata
                __ksymtab __ksymtab_gpl __ksymtab_strings __init_rodata __param
                __modver
         01     .data .vvar
         02     .data..percpu
         03     .init.text .init.data .x86_cpu_dev.init .parainstructions
                .altinstructions .altinstr_replacement .iommu_table .apicdrivers
                .exit.text .smp_locks .bss .brk
         04     .notes
      
      As mentioned, run_size needs to be the size of the running kernel
      including .bss and .brk. We can see from the Section/Segment mapping
      above that .bss and .brk are included in segment 03 (which corresponds
      to the final LOAD program header). To find the run_size, we calculate
      the end of the LOAD segment from its PhysAddr start (0x0000000001d5e000)
      and its MemSiz (0x0000000000301000), minus the physical load address of
      the kernel (the first LOAD segment's PhysAddr: 0x0000000001000000). The
      resulting run_size is 0x105f000:
      
       0x0000000001d5e000 + 0x0000000000301000 - 0x0000000001000000 = 0x105f000
      
      So, from this we can see that the existing run_size calculation is
      0x400000 too high. And, as it turns out, the correct run_size is
      actually equal to VO_end - VO_text, which is certainly easier to calculate:
      
       _end:  0xffffffff8205f000
       _text: 0xffffffff81000000
      
       0xffffffff8205f000 - 0xffffffff81000000 = 0x105f000
      
      As a result, run_size is a simple constant, so we don't need to pass it
      around; we already have voffset.h for such things. We can share voffset.h
      between misc.c and header.S instead of getting run_size in other ways.
      This patch moves the voffset.h creation code to boot/compressed/Makefile,
      and switches misc.c to use the VO_end - VO_text calculation for run_size;
      a sketch of the moved rule follows the dependency chains below.
      
      Dependency chain before:
      
       boot/header.S ==> boot/voffset.h ==> vmlinux
       boot/header.S ==> compressed/vmlinux ==> compressed/misc.c
      
      Dependency chain after:
      
       boot/header.S ==> compressed/vmlinux ==> compressed/misc.c ==> boot/voffset.h ==> vmlinux
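
      A hedged sketch of the moved rule (the sed pattern, symbol list, and
      output path are assumptions):

        # boot/compressed/Makefile: extract _text/_end from vmlinux into
        # voffset.h, so misc.c can compute run_size from the VO_ symbols
        sed-voffset := -e 's/^\([0-9a-fA-F]*\) [ABCDGRSTVW] \(_text\|_end\)$$/\#define VO_\2 _AC(0x\1,UL)/p'

        quiet_cmd_voffset = VOFFSET $@
              cmd_voffset = $(NM) $< | sed -n $(sed-voffset) > $@

        $(obj)/../voffset.h: vmlinux FORCE
                $(call if_changed,voffset)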
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Baoquan He <bhe@redhat.com>
      [ Rewrote the changelog. ]
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Cc: Junjie Mao <eternal.n08@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: lasse.collin@tukaani.org
      Fixes: e6023367 ("x86, kaslr: Prevent .bss from overlaping initrd")
      Link: http://lkml.kernel.org/r/1461888548-32439-5-git-send-email-keescook@chromium.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/boot: Calculate decompression size during boot not build · d607251b
      Authored by Yinghai Lu
      Currently z_extract_offset is calculated in boot/compressed/mkpiggy.c.
      This doesn't work well because mkpiggy.c doesn't know the details of the
      decompressor in use. As a result, it can only make an estimate, which
      carries risks:
      
       - output + output_len (VO) could be much bigger than input + input_len
         (ZO). In this case, the decompressed kernel plus relocs could overwrite
         the decompression code while it is running.
      
       - The head code of ZO could be bigger than z_extract_offset. In this case
         an overwrite could happen when the head code is running to move ZO to
         the end of the buffer. Though the head code is currently very small,
         it's still a potential risk. Since there is no rule limiting the size
         of ZO's head code, it runs the risk of suddenly becoming a (hard to
         find) bug.
      
      Instead, this moves the z_extract_offset calculation into header.S, and
      makes adjustments to be sure that the above two cases can never happen,
      and further corrects the comments describing the calculations.
      
      Since we have (in the previous patch) made ZO always be located against
      the end of the decompression buffer, z_extract_offset is only used here to
      calculate an appropriate buffer size (INIT_SIZE), and is no longer used
      elsewhere. As such, it can be removed from voffset.h.
      
      Additionally, clean up the #if/#else #define blocks to improve readability.
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Baoquan He <bhe@redhat.com>
      [ Rewrote the changelog and comments. ]
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: lasse.collin@tukaani.org
      Link: http://lkml.kernel.org/r/1461888548-32439-4-git-send-email-keescook@chromium.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  8. 23 Mar 2016 (1 commit)
    • kernel: add kcov code coverage · 5c9a8750
      Authored by Dmitry Vyukov
      kcov provides code coverage collection for coverage-guided fuzzing
      (randomized testing).  Coverage-guided fuzzing is a testing technique
      that uses coverage feedback to determine new interesting inputs to a
      system.  A notable user-space example is AFL
      (http://lcamtuf.coredump.cx/afl/).  However, this technique is not
      widely used for kernel testing due to missing compiler and kernel
      support.
      
      kcov does not aim to collect as much coverage as possible.  It aims to
      collect more or less stable coverage that is a function of syscall inputs.
      To achieve this goal it does not collect coverage in soft/hard interrupts,
      and instrumentation of some inherently non-deterministic or non-interesting
      parts of the kernel is disabled (e.g. scheduler, locking).
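
      The likely reason this commit appears in the x86 boot Makefile history is
      the per-directory opt-out the patch introduces; a sketch:

        # arch/x86/boot/Makefile (and compressed/): early boot code produces
        # no meaningful coverage, so don't instrument it
        KCOV_INSTRUMENT := n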
      
      Currently there is a single coverage collection mode (tracing), but the
      API anticipates additional collection modes.  Initially I also
      implemented a second mode which exposes coverage in a fixed-size hash
      table of counters (what Quentin used in his original patch).  I've
      dropped the second mode for simplicity.
      
      This patch adds the necessary support on the kernel side.  The
      complementary compiler support was added in gcc revision 231296.
      
      We've used this support to build syzkaller system call fuzzer, which has
      found 90 kernel bugs in just 2 months:
      
        https://github.com/google/syzkaller/wiki/Found-Bugs
      
      We've also found 30+ bugs in our internal systems with syzkaller.
      Another (yet unexplored) direction where kcov coverage would greatly
      help is more traditional "blob mutation".  For example, mounting a
      random blob as a filesystem, or receiving a random blob over wire.
      
      Why not gcov?  A typical fuzzing loop looks as follows: (1) reset
      coverage, (2) execute a bit of code, (3) collect coverage, repeat.  A
      typical coverage set can be just a dozen basic blocks (e.g. an invalid
      input).  In such a context gcov becomes prohibitively expensive, as the
      reset/collect coverage steps depend on the total number of basic
      blocks/edges in the program (in the case of the kernel, about 2M).  The
      cost of kcov depends only on the number of executed basic blocks/edges.
      On top of that, the kernel requires per-thread coverage because there are
      always background threads and unrelated processes that also produce
      coverage.  With inlined gcov instrumentation, per-thread coverage is not
      possible.
      
      kcov exposes kernel PCs and control flow to user-space, which is
      insecure.  But debugfs should not be mapped as user-accessible.
      
      Based on a patch by Quentin Casasnovas.
      
      [akpm@linux-foundation.org: make task_struct.kcov_mode have type `enum kcov_mode']
      [akpm@linux-foundation.org: unbreak allmodconfig]
      [akpm@linux-foundation.org: follow x86 Makefile layout standards]
      Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Cc: syzkaller <syzkaller@googlegroups.com>
      Cc: Vegard Nossum <vegard.nossum@oracle.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Tavis Ormandy <taviso@google.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Kostya Serebryany <kcc@google.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Kees Cook <keescook@google.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: David Drysdale <drysdale@google.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Kirill A. Shutemov <kirill@shutemov.name>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  9. 29 Feb 2016 (1 commit)
    • objtool: Mark non-standard object files and directories · c0dd6716
      Authored by Josh Poimboeuf
      Code which runs outside the kernel's normal mode of operation often does
      unusual things which can cause a static analysis tool like objtool to
      emit false positive warnings:
      
       - boot image
       - vdso image
       - relocation
       - realmode
       - efi
       - head
       - purgatory
       - modpost
      
      Set OBJECT_FILES_NON_STANDARD for the related files and directories,
      which tells objtool to skip checking them (a sketch follows below).
      It's OK to skip them because they don't affect runtime stack traces.
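
      A sketch of the marking; the variable names are the ones this patch
      introduces, the placements are illustrative:

        # whole directory, e.g. arch/x86/boot/Makefile:
        OBJECT_FILES_NON_STANDARD := y

        # single object, e.g. for the test_nx module mentioned below:
        OBJECT_FILES_NON_STANDARD_test_nx.o := y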
      
      Also skip the following code which does the right thing with respect to
      frame pointers, but is too "special" to be validated by a tool:
      
       - entry
       - mcount
      
      Also skip the test_nx module because it modifies its exception handling
      table at runtime, which objtool can't understand.  Fortunately it's
      just a test module so it doesn't matter much.
      
      Currently objtool is the only user of OBJECT_FILES_NON_STANDARD, but it
      might eventually be useful for other tools.
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Bernd Petrovitsch <bernd@petrovitsch.priv.at>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Chris J Arges <chris.j.arges@canonical.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michal Marek <mmarek@suse.cz>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Cc: Pedro Alves <palves@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: live-patching@vger.kernel.org
      Link: http://lkml.kernel.org/r/366c080e3844e8a5b6a0327dc7e8c2b90ca3baeb.1456719558.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  10. 21 Jan 2016 (1 commit)
  11. 06 Nov 2015 (1 commit)
  12. 21 Jul 2015 (1 commit)
  13. 14 Feb 2015 (1 commit)
    • x86_64: add KASan support · ef7f0d6a
      Authored by Andrey Ryabinin
      This patch adds arch-specific code for the kernel address sanitizer.
      
      16TB of virtual address space is used for the shadow memory.  It's located
      in the range [ffffec0000000000 - fffffc0000000000], between vmemmap and
      the %esp fixup stacks.
      
      At an early stage we map the whole shadow region with the zero page.
      Later, after pages are mapped to the direct mapping address range, we
      unmap zero pages from the corresponding shadow (see kasan_map_shadow())
      and allocate and map real shadow memory, reusing the vmemmap_populate()
      function.
      
      Also, replace __pa with __pa_nodebug before the shadow is initialized.
      With CONFIG_DEBUG_VIRTUAL=y, __pa makes an external function call
      (__phys_addr); __phys_addr is instrumented, so __asan_load could be
      called before the shadow area is initialized.
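
      This is presumably also why the commit shows up in the boot Makefile
      history: code that runs before the shadow exists must opt out of
      instrumentation. A sketch of the kbuild switch:

        # arch/x86/boot/Makefile, arch/x86/boot/compressed/Makefile (sketch):
        KASAN_SANITIZE := n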
      Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Yuri Gribov <tetra2005@gmail.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Jim Davis <jim.epost@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  14. 23 Dec 2014 (1 commit)
  15. 18 Aug 2014 (1 commit)
    • x86: Support compiling out human-friendly processor feature names · 9def39be
      Authored by Josh Triplett
      The table mapping CPUID bits to human-readable strings takes up a
      non-trivial amount of space, and only exists to support /proc/cpuinfo
      and a couple of kernel messages.  Since programs depend on the format of
      /proc/cpuinfo, force inclusion of the table when building with /proc
      support; otherwise, support omitting that table to save space, in which
      case the kernel messages will print features numerically instead.
      
      In addition to saving 1408 bytes out of vmlinux, this also saves 1373
      bytes out of the uncompressed setup code, which contributes directly to
      the size of bzImage.
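
      A hedged guess at the build-side gating, assuming the cpustr.h generation
      rule in arch/x86/boot/Makefile is what gets wrapped:

        ifeq ($(CONFIG_X86_FEATURE_NAMES),y)
        $(obj)/cpu.o: $(obj)/cpustr.h

        quiet_cmd_cpustr = CPUSTR  $@
              cmd_cpustr = $(obj)/mkcpustr > $@
        targets += cpustr.h
        $(obj)/cpustr.h: $(obj)/mkcpustr FORCE
                $(call if_changed,cpustr)
        endif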
      Signed-off-by: Josh Triplett <josh@joshtriplett.org>
  16. 06 May 2014 (1 commit)
  17. 05 Mar 2014 (1 commit)
    • x86/efi: Firmware agnostic handover entry points · b8ff87a6
      Authored by Matt Fleming
      The EFI handover code only works if the "bitness" of the firmware and
      the kernel match, i.e. 64-bit firmware and 64-bit kernel - it is not
      possible to mix the two. This goes against the tradition that a 32-bit
      kernel can be loaded on a 64-bit BIOS platform without having to do
      anything special in the boot loader. Linux distributions, for one thing,
      regularly run only 32-bit kernels on their live media.
      
      Despite having only one 'handover_offset' field in the kernel header,
      EFI boot loaders use two separate entry points to enter the kernel based
      on the architecture the boot loader was compiled for,
      
          (1) 32-bit loader: handover_offset
          (2) 64-bit loader: handover_offset + 512
      
      Since we already have two entry points, we can leverage them to infer
      the bitness of the firmware we're running on, without requiring any boot
      loader modifications, by making (1) and (2) valid entry points for both
      CONFIG_X86_32 and CONFIG_X86_64 kernels.
      
      To be clear, a 32-bit boot loader will always use (1) and a 64-bit boot
      loader will always use (2). It's just that, if a single kernel image
      supports both (1) and (2), that image can be used with both 32-bit and
      64-bit boot loaders, and hence both 32-bit and 64-bit EFI.
      
      (1) and (2) must be 512 bytes apart at all times, but that is already
      part of the boot ABI and we could never change that delta without
      breaking existing boot loaders anyhow.
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
  18. 22 Jan 2014 (1 commit)
  19. 10 Dec 2013 (1 commit)
    • x86, build: Pass in additional -mno-mmx, -mno-sse options · 8b3b005d
      Authored by H. Peter Anvin
      In checkin
      
          5551a34e x86-64, build: Always pass in -mno-sse
      
      we unconditionally added -mno-sse to the main build, to keep newer
      compilers from generating SSE instructions from autovectorization.
      However, this did not extend to the special environments
      (arch/x86/boot, arch/x86/boot/compressed, and arch/x86/realmode/rm).
      Add -mno-sse to the compiler command line for these environments, and
      add -mno-mmx to all the environments as well, as we don't want the
      compiler to generate MMX code either.
      
      This patch also removes a $(cc-option) call for -m32, since we have
      long since stopped supporting compilers too old for the -m32 option,
      and in fact hardcode it in other places in the Makefiles. A sketch of
      the flag changes follows below.
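
      A sketch of the flag changes (exact variable names differ between the
      three Makefiles):

        # boot/realmode environments: forbid vector code generation, as the
        # main build already does
        KBUILD_CFLAGS += -mno-mmx -mno-sse

        # and -m32 is now hardcoded instead of probed:
        KBUILD_CFLAGS += -m32        # was: $(call cc-option,-m32)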
      Reported-by: Kevin B. Smith <kevin.b.smith@intel.com>
      Cc: Sunil K. Pandey <sunil.k.pandey@intel.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: H. J. Lu <hjl.tools@gmail.com>
      Link: http://lkml.kernel.org/n/tip-j21wzqv790q834n7yc6g80j1@git.kernel.org
      Cc: <stable@vger.kernel.org> # build fix only
  20. 13 Oct 2013 (1 commit)
  21. 27 Sep 2013 (1 commit)
  22. 28 Jan 2013 (1 commit)
    • x86, build: Dynamically find entry points in compressed startup code · 99f857db
      Authored by David Woodhouse
      We have historically hard-coded entry points in head.S just so it's easy
      to build the executable/bzImage headers with references to them.
      
      Unfortunately, this leads to boot loaders abusing these "known" addresses
      even when they are *explicitly* told that they "should look at the ELF
      header to find this address, as it may change in the future". And even
      when the address in question *has* actually been changed in the past,
      without fanfare or thought to compatibility.
      
      Thus we have bootloaders doing stunningly broken things like jumping
      to offset 0x200 in the kernel startup code in 64-bit mode, *hoping*
      that startup_64 is still there (it has moved at least once
      before). And hoping that it's actually a 64-bit kernel despite the
      fact that we don't give them any indication of that fact.
      
      This patch should hopefully remove the temptation to abuse internal
      addresses in future, where sternly worded comments have not sufficed.
      Instead of having hard-coded addresses and saying "please don't abuse
      these", we actually pull the addresses out of the ELF payload into
      zoffset.h, and make build.c shove them back into the right places in
      the bzImage header.
      
      Rather than including zoffset.h into build.c, and thus having to rebuild
      the tool for every kernel build, we parse it instead. The parsing code is
      small and simple. A sketch of the generation rule follows below.
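
      A hedged sketch of the generation rule (sed pattern abridged; the exact
      symbol list is an assumption):

        sed-zoffset := -e 's/^\([0-9a-fA-F]*\) [ABCDGRSTVW] \(startup_32\|startup_64\|input_data\|_end\|z_.*\)$$/\#define ZO_\2 0x\1/p'

        quiet_cmd_zoffset = ZOFFSET $@
              cmd_zoffset = $(NM) $< | sed -n $(sed-zoffset) > $@

        $(obj)/zoffset.h: $(obj)/compressed/vmlinux FORCE
                $(call if_changed,zoffset)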
      
      This patch doesn't actually move any of the interesting entry points, so
      any offending bootloader will still continue to "work" after this patch
      is applied. For some version of "work" which includes jumping into the
      compressed payload and crashing, if the bzImage it's given is a 32-bit
      kernel. No change there then.
      
      [ hpa: some of the issues in the description are addressed or
        retconned by the 2.12 boot protocol.  This patch has been edited to
        only remove fixed addresses that were *not* thus retconned. ]
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      Link: http://lkml.kernel.org/r/1358513837.2397.247.camel@shinybook.infradead.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Matt Fleming <matt.fleming@intel.com>
  23. 15 Oct 2012 (1 commit)
  24. 03 Oct 2012 (1 commit)
  25. 11 Aug 2012 (1 commit)
  26. 23 Mar 2012 (1 commit)
  27. 29 Feb 2012 (1 commit)
  28. 26 May 2011 (1 commit)
  29. 03 Aug 2010 (1 commit)
  30. 19 Jun 2009 (1 commit)
    • gcov: enable GCOV_PROFILE_ALL for x86_64 · 7bf99fb6
      Authored by Peter Oberparleiter
      Enable gcov profiling of the entire kernel on x86_64. Required changes
      include disabling profiling for:
      
      * arch/x86/kernel/acpi/realmode and arch/x86/boot/compressed:
        not linked to the main kernel
      * arch/x86/vdso, arch/x86/kernel/vsyscall_64 and arch/x86/kernel/hpet:
        profiling causes segfaults during boot (incompatible context)
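
      The disabling presumably uses the stock kbuild gcov switches, per
      directory or per object (a sketch):

        # whole directory:
        GCOV_PROFILE := n

        # single object, e.g. for the vsyscall page:
        GCOV_PROFILE_vsyscall_64.o := n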
      Signed-off-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: Li Wei <W.Li@Sun.COM>
      Cc: Michael Ellerman <michaele@au1.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Heiko Carstens <heicars2@linux.vnet.ibm.com>
      Cc: Martin Schwidefsky <mschwid2@linux.vnet.ibm.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: WANG Cong <xiyou.wangcong@gmail.com>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  31. 21 May 2009 (1 commit)
  32. 12 May 2009 (1 commit)
  33. 10 Apr 2009 (1 commit)
    • x86, setup: "glove box" BIOS calls -- infrastructure · 7a734e7d
      Authored by H. Peter Anvin
      Impact: new interfaces (not yet used)
      
      For all the platforms out there, there is an infinite number of buggy
      BIOSes.  This adds infrastructure to treat BIOS interrupts more like
      toxic waste and "glove box" them -- we switch out the register set,
      perform the BIOS interrupt, and then restore the previous state.
      
      LKML-Reference: <49DE7F79.4030106@zytor.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Rafael J. Wysocki <rjw@sisk.pl>
  34. 03 Apr 2009 (1 commit)
  35. 13 Mar 2009 (2 commits)
  36. 12 Mar 2009 (1 commit)
    • x86: remove zImage support · 5e47c478
      Authored by H. Peter Anvin
      Impact: obsolete feature removal
      
      The zImage kernel format has been functionally unused for a very long
      time.  It is just barely possible to build a modern kernel that still
      fits within the zImage size limit, but it is highly unlikely that
      anyone ever uses it.  Furthermore, although it is still supported by
      most bootloaders, it has been at best poorly tested (or not tested at
      all); some bootloaders are even known to not support zImage at all,
      without anyone ever having noticed.
      
      Also remove some really obsolete constants that no longer have any
      meaning.
      
      LKML-Reference: <49B703D4.1000008@zytor.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  37. 23 Feb 2009 (1 commit)
    • x86: remove the Voyager 32-bit subarch · 965c7eca
      Authored by Ingo Molnar
      Impact: remove unused/broken code
      
      The Voyager subarch last built successfully on the v2.6.26 kernel
      and has been stale since then and does not build on the v2.6.27,
      v2.6.28 and v2.6.29-rc5 kernels.
      
      No actual users beyond the maintainer reported this breakage.
      Patches were sent and most of the fixes were accepted but the
      discussion around how to do a few remaining issues cleanly
      fizzled out with no resolution and the code remained broken.
      
      In the v2.6.30 x86 tree development cycle, 32-bit subarch support
      has been reworked and removed - and the Voyager code, beyond the
      build problems already known, needs serious and significant
      changes and probably a rewrite to support it.
      
      CONFIG_X86_VOYAGER has therefore been marked BROKEN. The maintainer has
      been notified but no patches have been sent so far to fix it.
      
      While all other subarchs have been converted to the new scheme,
      voyager is still broken. We'd prefer to receive patches which
      clean up the current situation in a constructive way, but even in
      case of removal there is no obstacle to add that support back
      after the issues have been sorted out in a mutually acceptable
      fashion.
      
      So remove this inactive code for now.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  38. 05 Oct 2008 (1 commit)