1. 05 March 2014, 3 commits
    • x86/boot: Fix non-EFI build · 3db4cafd
      Committed by Matt Fleming
      The kbuild test robot reported the following errors, introduced with
      commit 54b52d87 ("x86/efi: Build our own EFI services pointer
      table"),
      
       arch/x86/boot/compressed/head_32.o: In function `efi32_config':
      >> (.data+0x58): undefined reference to `efi_call_phys'
      
       arch/x86/boot/compressed/head_64.o: In function `efi64_config':
      >> (.data+0x90): undefined reference to `efi_call6'
      
      Wrap the efi*_config structures in #ifdef CONFIG_EFI_STUB so that we
      don't make references to EFI functions if they're not compiled in.
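
      As a rough, self-contained illustration of the failure mode and the
      guard pattern (CONFIG_EFI_STUB and efi_call_phys are real names from
      the text above; everything else is made up):

      	/* Without -DCONFIG_EFI_STUB this compiles and links cleanly;
      	 * with it, the link fails unless efi_call_phys is provided -
      	 * which is exactly why such references must be guarded. */
      	#include <stdio.h>

      	#ifdef CONFIG_EFI_STUB
      	extern void efi_call_phys(void);   /* only exists with the EFI stub */
      	static void (*efi_config_call)(void) = efi_call_phys;
      	#endif

      	int main(void)
      	{
      	#ifdef CONFIG_EFI_STUB
      		efi_config_call();
      	#else
      		puts("EFI stub not built in: no EFI references emitted");
      	#endif
      		return 0;
      	}
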
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      3db4cafd
    • x86/efi: Firmware agnostic handover entry points · b8ff87a6
      Committed by Matt Fleming
      The EFI handover code only works if the "bitness" of the firmware and
      the kernel match, i.e. 64-bit firmware and 64-bit kernel - it is not
      possible to mix the two. This goes against the tradition that a 32-bit
      kernel can be loaded on a 64-bit BIOS platform without having to do
      anything special in the boot loader. Linux distributions, for one thing,
      regularly run only 32-bit kernels on their live media.
      
      Despite having only one 'handover_offset' field in the kernel header,
      EFI boot loaders use two separate entry points to enter the kernel based
      on the architecture the boot loader was compiled for,
      
          (1) 32-bit loader: handover_offset
          (2) 64-bit loader: handover_offset + 512
      
      Since we already have two entry points, we can leverage them to infer
      the bitness of the firmware we're running on, without requiring any boot
      loader modifications, by making (1) and (2) valid entry points for both
      CONFIG_X86_32 and CONFIG_X86_64 kernels.
      
      To be clear, a 32-bit boot loader will always use (1) and a 64-bit boot
      loader will always use (2). It's just that, if a single kernel image
      supports (1) and (2) that image can be used with both 32-bit and 64-bit
      boot loaders, and hence both 32-bit and 64-bit EFI.
      
      (1) and (2) must be 512 bytes apart at all times, but that is already
      part of the boot ABI and we could never change that delta without
      breaking existing boot loaders anyhow.
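
      As a hedged sketch of the loader-side choice (the 512-byte delta is the
      ABI fact described above; the function, parameter names and example
      values are made up for illustration):

      	#include <stdint.h>
      	#include <stdio.h>

      	/* Pick the handover entry address a boot loader would jump to. */
      	static uint64_t handover_entry(uint64_t kernel_base,
      	                               uint32_t handover_offset,
      	                               int loader_is_64bit)
      	{
      		uint64_t entry = kernel_base + handover_offset; /* (1) */

      		if (loader_is_64bit)
      			entry += 512;                           /* (2) */
      		return entry;
      	}

      	int main(void)
      	{
      		/* made-up example values */
      		printf("32-bit loader: %#llx\n",
      		       (unsigned long long)handover_entry(0x100000, 0x190, 0));
      		printf("64-bit loader: %#llx\n",
      		       (unsigned long long)handover_entry(0x100000, 0x190, 1));
      		return 0;
      	}
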
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      b8ff87a6
    • x86/efi: Build our own EFI services pointer table · 54b52d87
      Committed by Matt Fleming
      It's not possible to dereference the EFI System Table directly when
      booting a 64-bit kernel on a 32-bit EFI firmware because the pointer
      sizes don't match.
      
      In preparation for supporting the above use case, build a list of
      function pointers on boot so that callers don't have to worry about
      converting pointer sizes through multiple levels of indirection.
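
      A rough sketch of the idea, with made-up field names (the real table is
      built in the compressed boot code; the point is that every service ends
      up in a fixed-width slot, so callers never have to interpret the 32-bit
      or 64-bit EFI System Table layout themselves):

      	#include <stdint.h>

      	/* Every firmware service the stub needs, stored as a 64-bit value
      	 * regardless of the firmware's native pointer width. */
      	struct efi_services_sketch {
      		uint64_t allocate_pages;
      		uint64_t get_memory_map;
      		uint64_t exit_boot_services;
      		uint64_t output_string;
      	};

      	/* Populated once, early in boot, from whichever system-table layout
      	 * the firmware provides; left zeroed here so the sketch compiles. */
      	static struct efi_services_sketch efi_services;
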
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      54b52d87
  2. 13 October 2013, 1 commit
  3. 08 August 2013, 1 commit
  4. 29 January 2013, 1 commit
  5. 28 January 2013, 2 commits
    • x86, build: Dynamically find entry points in compressed startup code · 99f857db
      Committed by David Woodhouse
      We have historically hard-coded entry points in head.S just so it's easy
      to build the executable/bzImage headers with references to them.
      
      Unfortunately, this leads to boot loaders abusing these "known" addresses
      even when they are *explicitly* told that they "should look at the ELF
      header to find this address, as it may change in the future". And even
      when the address in question *has* actually been changed in the past,
      without fanfare or thought to compatibility.
      
      Thus we have bootloaders doing stunningly broken things like jumping
      to offset 0x200 in the kernel startup code in 64-bit mode, *hoping*
      that startup_64 is still there (it has moved at least once
      before). And hoping that it's actually a 64-bit kernel despite the
      fact that we don't give them any indication of that fact.
      
      This patch should hopefully remove the temptation to abuse internal
      addresses in future, where sternly worded comments have not sufficed.
      Instead of having hard-coded addresses and saying "please don't abuse
      these", we actually pull the addresses out of the ELF payload into
      zoffset.h, and make build.c shove them back into the right places in
      the bzImage header.
      
      Rather than including zoffset.h into build.c and thus having to rebuild
      the tool for every kernel build, we parse it instead. The parsing code
      is small and simple.
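
      A hedged sketch of that parsing approach (the "#define ZO_<symbol>
      <address>" line format follows zoffset.h, but the helper and the example
      value below are made up):

      	#include <stdio.h>

      	static int parse_zoffset_line(const char *line, char *name,
      	                              unsigned long *val)
      	{
      		return sscanf(line, "#define ZO_%63s %lx", name, val) == 2;
      	}

      	int main(void)
      	{
      		char name[64];
      		unsigned long val;
      		const char *example = "#define ZO_startup_64 0x200";

      		if (parse_zoffset_line(example, name, &val))
      			printf("%s = %#lx\n", name, val);
      		return 0;
      	}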
      
      This patch doesn't actually move any of the interesting entry points, so
      any offending bootloader will still continue to "work" after this patch
      is applied. For some version of "work" which includes jumping into the
      compressed payload and crashing, if the bzImage it's given is a 32-bit
      kernel. No change there then.
      
      [ hpa: some of the issues in the description are addressed or
        retconned by the 2.12 boot protocol.  This patch has been edited to
        only remove fixed addresses that were *not* thus retconned. ]
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      Link: http://lkml.kernel.org/r/1358513837.2397.247.camel@shinybook.infradead.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Matt Fleming <matt.fleming@intel.com>
      99f857db
    • x86, efi: Fix 32-bit EFI handover protocol entry point · f791620f
      Committed by David Woodhouse
      If the bootloader calls the EFI handover entry point as a standard function
      call, then it'll have a return address on the stack. We need to pop that
      before calling efi_main(), or the arguments will all be out of position on
      the stack.
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      Cc: <stable@kernel.org>
      Link: http://lkml.kernel.org/r/1358513837.2397.247.camel@shinybook.infradead.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Matt Fleming <matt.fleming@intel.com>
      f791620f
  6. 21 July 2012, 1 commit
    • x86, efi: Handover Protocol · 9ca8f72a
      Committed by Matt Fleming
      As things currently stand, traditional EFI boot loaders and the EFI
      boot stub are carrying essentially the same initialisation code
      required to setup an EFI machine for booting a kernel. There's really
      no need to have this code in two places and the hope is that, with
      this new protocol, initialisation and booting of the kernel can be
      left solely to the kernel's EFI boot stub. The responsibilities of the
      boot loader then become,
      
         o Loading the kernel image from boot media
      
      File system code still needs to be carried by boot loaders for the
      scenario where the kernel and initrd files reside on a file system
      that the EFI firmware doesn't natively understand, such as ext4, etc.
      
         o Providing a user interface
      
      Boot loaders still need to display any menus/interfaces, for example
      to allow the user to select from a list of kernels.
      
      Bump the boot protocol number because we added the 'handover_offset'
      field to indicate the location of the handover protocol entry point.
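
      From the boot loader's side the hand-off is, in rough C terms, a call
      through that offset with the firmware handle, the EFI system table and
      the boot_params (a sketch only; the exact calling convention differs
      between the 32-bit and 64-bit entry points and the names here are made
      up):

      	#include <stdint.h>

      	struct boot_params;  /* the kernel's zero page, filled in by the loader */

      	typedef void (*efi_handover_fn)(void *image_handle, void *system_table,
      	                                struct boot_params *bp);

      	static void enter_kernel(void *kernel_base, uint32_t handover_offset,
      	                         void *image_handle, void *system_table,
      	                         struct boot_params *bp)
      	{
      		efi_handover_fn handover = (efi_handover_fn)
      			((uint8_t *)kernel_base + handover_offset);

      		handover(image_handle, system_table, bp); /* does not return */
      	}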
      
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Peter Jones <pjones@redhat.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      Acked-and-Tested-by: Matthew Garrett <mjg@redhat.com>
      Link: http://lkml.kernel.org/r/1342689828-16815-1-git-send-email-matt@console-pimps.org
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      9ca8f72a
  7. 17 April 2012, 1 commit
  8. 13 December 2011, 1 commit
    • x86, efi: EFI boot stub support · 291f3632
      Committed by Matt Fleming
      There is currently a large divide between kernel development and the
      development of EFI boot loaders. The idea behind this patch is to give
      the kernel developers full control over the EFI boot process. As
      H. Peter Anvin put it,
      
      "The 'kernel carries its own stub' approach been very successful in
      dealing with BIOS, and would make a lot of sense to me for EFI as
      well."
      
      This patch introduces an EFI boot stub that allows an x86 bzImage to
      be loaded and executed by EFI firmware. The bzImage appears to the
      firmware as an EFI application. Luckily there are enough free bits
      within the bzImage header so that it can masquerade as an EFI
      application, thereby coercing the EFI firmware into loading it and
      jumping to its entry point. The beauty of this masquerading approach
      is that both BIOS and EFI boot loaders can still load and run the same
      bzImage, thereby allowing a single kernel image to work in any boot
      environment.
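
      What the masquerading amounts to, roughly, is that the image carries the
      DOS 'MZ' magic at offset 0 and an offset to a PE/COFF header at byte
      0x3c, which is what EFI firmware looks for. A hedged, stand-alone checker
      for illustration (not the boot stub's code):

      	#include <stdint.h>
      	#include <string.h>

      	static int looks_like_efi_application(const uint8_t *image, size_t len)
      	{
      		uint32_t pe_off;

      		if (len < 0x40 || image[0] != 'M' || image[1] != 'Z')
      			return 0;
      		memcpy(&pe_off, image + 0x3c, sizeof(pe_off)); /* e_lfanew */
      		if ((size_t)pe_off + 4 > len)
      			return 0;
      		return memcmp(image + pe_off, "PE\0\0", 4) == 0;
      	}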
      
      The EFI boot stub supports multiple initrds, but they must exist on
      the same partition as the bzImage. Command-line arguments for the
      kernel can be appended after the bzImage name when run from the EFI
      shell, e.g.
      
      Shell> bzImage console=ttyS0 root=/dev/sdb initrd=initrd.img
      
      v7:
       - Fix checkpatch warnings.
      
      v6:
      
       - Try to allocate initrd memory just below hdr->initrd_addr_max.
      
      v5:
      
       - load_options_size is UTF-16, which needs dividing by 2 to convert
         to the corresponding ASCII size.
      
      v4:
      
       - Don't read more than image->load_options_size
      
      v3:
      
       - Fix following warnings when compiling CONFIG_EFI_STUB=n
      
         arch/x86/boot/tools/build.c: In function ‘main’:
         arch/x86/boot/tools/build.c:138:24: warning: unused variable ‘pe_header’
         arch/x86/boot/tools/build.c:138:15: warning: unused variable ‘file_sz’
      
       - As reported by Matthew Garrett, some Apple machines have GOPs that
         don't have hardware attached. We need to weed these out by
         searching for ones that handle the PCIIO protocol.
      
       - Don't allocate memory if no initrds are on cmdline
       - Don't trust image->load_options_size
      
      Maarten Lankhorst noted:
       - Don't strip first argument when booted from efibootmgr
       - Don't allocate too much memory for cmdline
       - Don't update cmdline_size, the kernel considers it read-only
       - Don't accept '\n' for initrd names
      
      v2:
      
       - File alignment was too large, was 8192 should be 512. Reported by
         Maarten Lankhorst on LKML.
       - Added UGA support for graphics
       - Use VIDEO_TYPE_EFI instead of hard-coded number.
       - Move linelength assignment until after we've assigned depth
       - Dynamically fill out AddressOfEntryPoint in tools/build.c
       - Don't use magic number for GDT/TSS stuff. Requested by Andi Kleen
       - The bzImage may need to be relocated as it may have been loaded at
         a high address by the firmware. This was required to get my
         macbook booting because the firmware loaded it at 0x7cxxxxxx, which
         triggers this error in decompress_kernel(),
      
      	if (heap > ((-__PAGE_OFFSET-(128<<20)-1) & 0x7fffffff))
      		error("Destination address too large");
      
      Cc: Mike Waychison <mikew@google.com>
      Cc: Matthew Garrett <mjg@redhat.com>
      Tested-by: Henrik Rydberg <rydberg@euromail.se>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      Link: http://lkml.kernel.org/r/1321383097.2657.9.camel@mfleming-mobl1.ger.corp.intel.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      291f3632
  9. 03 August 2010, 1 commit
  10. 19 September 2009, 1 commit
  11. 12 May 2009, 2 commits
    • x86, boot: make kernel_alignment adjustable; new bzImage fields · 37ba7ab5
      Committed by H. Peter Anvin
      Make the kernel_alignment field adjustable; this allows us to set it
      to a large value (intended to be 16 MB to avoid ZONE_DMA contention,
      memory holes and other weirdness) while a smart bootloader can still
      force a loading at a lesser alignment if absolutely necessary.
      
      Also export pref_address (preferred loading address, corresponding to
      the link-time address) and init_size, the total amount of linear
      memory the kernel will require during initialization.
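
      A sketch of how a boot loader might consume these fields (the field
      names follow the text above; the placement policy and helper are made
      up):

      	#include <stdint.h>

      	static uint64_t align_up(uint64_t x, uint64_t a)
      	{
      		return (x + a - 1) & ~(a - 1);
      	}

      	/* Prefer pref_address, fall back to the lowest suitably aligned
      	 * address, and require init_size bytes of linear memory there. */
      	static uint64_t choose_load_address(uint64_t pref_address,
      	                                    uint32_t kernel_alignment,
      	                                    uint32_t init_size,
      	                                    uint64_t region_start,
      	                                    uint64_t region_end)
      	{
      		uint64_t addr = pref_address;

      		if (addr < region_start || addr + init_size > region_end)
      			addr = align_up(region_start, kernel_alignment);

      		return (addr + init_size <= region_end) ? addr : 0;
      	}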
      
      [ Impact: allows better kernel placement, gives bootloader more info ]
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      37ba7ab5
    • x86, boot: remove dead code from boot/compressed/head_*.S · 99aa4559
      Committed by H. Peter Anvin
      Remove a couple of lines of dead code from
      arch/x86/boot/compressed/head_*.S; all of these update registers that
      are dead in the current code.
      
      [ Impact: cleanup ]
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      99aa4559
  12. 09 May 2009, 7 commits
    • x86, boot: determine compressed code offset at compile time · 02a884c0
      Committed by H. Peter Anvin
      Determine the compressed code offset (from the kernel runtime address)
      at compile time.  This allows some minor optimizations in
      arch/x86/boot/compressed/head_*.S, but more importantly it makes this
      value available to the build process, which will enable a future patch
      to export the necessary linear memory footprint into the bzImage
      header.
      
      [ Impact: cleanup, future patch enabling ]
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      02a884c0
    • x86, boot: use appropriate rep string for move and clear · 36d3793c
      Committed by H. Peter Anvin
      In the pre-decompression code, use the appropriate largest possible
      rep movs and rep stos to move code and clear bss, respectively.  For
      reverse copy, do note that the initial values are supposed to be the
      address of the first (highest) copy datum, not one byte beyond the end
      of the buffer.
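
      In C terms the reverse-copy note corresponds to starting both pointers
      at the last element rather than one past the end, roughly as below (a
      sketch, not the assembly in head_*.S):

      	#include <stddef.h>

      	/* Backward copy for overlapping buffers where dest > src: as with
      	 * "std; rep movs", the pointers start at the first (highest) datum
      	 * to be copied, not one byte past the end of the buffer. */
      	static void copy_backward(unsigned char *dest, const unsigned char *src,
      	                          size_t count)
      	{
      		const unsigned char *s;
      		unsigned char *d;

      		if (!count)
      			return;
      		s = src + count - 1;
      		d = dest + count - 1;
      		while (count--)
      			*d-- = *s--;
      	}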
      
      rep strings are not necessarily the fastest way to perform these
      operations on all current processors, but are likely to be in the
      future, and perhaps more importantly, we want to encourage the
      architecturally right thing to do here.
      
      This also fixes a couple of trivial inefficiencies on 64 bits.
      
      [ Impact: trivial performance enhancement, increase code similarity ]
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      36d3793c
    • x86, boot: zero EFLAGS on 32 bits · 97541912
      Committed by H. Peter Anvin
      The 64-bit code already clears EFLAGS as soon as it has a stack.  This
      seems like a reasonable precaution, so do it on 32 bits as well.
      
      [ Impact: extra paranoia ]
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      97541912
    • x86, boot: set up the decompression stack as early as possible · 0a137736
      Committed by H. Peter Anvin
      Set up the decompression stack as soon as we know where it needs to
      go.  That way we have a full-service stack as soon as possible, rather
      than relying on the BP_scratch field.
      
      Note that the stack does need to be empty during bss zeroing (or
      else the stack needs to be moved out of the bss segment, which is also
      an option.)
      
      [ Impact: cleanup, minor paranoia ]
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      0a137736
    • x86, boot: straighten out ranges to copy/zero in compressed/head*.S · 5b11f1ce
      Committed by H. Peter Anvin
      Both on 32 and 64 bits, we copy all the way up to the end of bss,
      except that on 64 bits there is a hack to avoid copying on top of the
      page tables.  There is no point in copying bss at all, especially
      since we are just about to zero it all anyway.
      
      To clean up and unify the handling, we now do:
      
        - copy from startup_32 to _bss.
        - zero from _bss to _ebss.
        - the _ebss symbol is aligned to an 8-byte boundary.
        - the page tables are moved to a separate section.
      
      Use _bss as the copy endpoint since _edata may be misaligned.
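
      Expressed in C, the unified sequence is roughly the equivalent of the
      following (a sketch; the real code is assembly in compressed/head_*.S
      and works on the linker-provided symbols directly):

      	#include <string.h>

      	/* 'image' is the loaded kernel starting at startup_32; bss_off and
      	 * ebss_off are the offsets of _bss and _ebss from startup_32. */
      	static void relocate_and_clear(char *dest, const char *image,
      	                               size_t bss_off, size_t ebss_off)
      	{
      		memmove(dest, image, bss_off);                 /* startup_32 .. _bss */
      		memset(dest + bss_off, 0, ebss_off - bss_off); /* _bss .. _ebss      */
      	}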
      
      [ Impact: cleanup, trivial performance improvement ]
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      5b11f1ce
    • x86, boot: stylistic cleanups for boot/compressed/head_32.S · 5f64ec64
      Committed by H. Peter Anvin
      Reformat arch/x86/boot/compressed/head_32.S to be closer to currently
      preferred kernel assembly style, that is:
      
      - opcode and operand separated by tab
      - operands separated by ", "
      - C-style comments
      
      This also makes it more similar to head_64.S.
      
      [ Impact: cleanup, no object code change ]
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      5f64ec64
    • x86, boot: use BP_scratch in arch/x86/boot/compressed/head_*.S · bd2a3698
      Committed by H. Peter Anvin
      Use the BP_scratch symbol from asm-offsets.h instead of hard-coding
      the location.
      
      [ Impact: cleanup ]
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      bd2a3698
  13. 27 April 2009, 1 commit
    • x86: unify arch/x86/boot/compressed/vmlinux_*.lds · 51b26ada
      Committed by Linus Torvalds
      Look at the:
      
      	diff -u arch/x86/boot/compressed/vmlinux_*.lds
      
      output and realize that they're basically exactly the same except for
      trivial naming differences, and the fact that the 64-bit version has a
      "pgtable" thing.
      
      So unify them.
      
      There's some trivial cleanup there (make the output format a Kconfig thing
      rather than doing #ifdef's for it, and unify both 32-bit and 64-bit BSS
      end to "_ebss", where 32-bit used to use the traditional "_end"), but
      other than that it's really very mindless and straight conversion.
      
      For example, I think we should aim to remove "startup_32" vs "startup_64",
      and just call it "startup", and get rid of one more difference. I didn't
      do that.
      
      Also, notice the comment in the unified vmlinux.lds.S talks about
      "head_64" and "startup_32" which is an odd and incorrect mix, but that was
      actually what the old 64-bit only lds file had, so the confusion isn't
      new, and now that mixing is arguably more accurate thanks to the
      vmlinux.lds.S file being shared between the two cases ;)
      
      [ Impact: cleanup, unification ]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Acked-by: Sam Ravnborg <sam@ravnborg.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      51b26ada
  14. 20 February 2009, 1 commit
  15. 14 February 2009, 1 commit
  16. 12 August 2008, 1 commit
  17. 20 April 2008, 1 commit
  18. 28 October 2007, 1 commit
  19. 22 October 2007, 1 commit
    • i386: paravirt boot sequence · a24e7851
      Committed by Rusty Russell
      This patch uses the updated boot protocol to do paravirtualized boot.
      If the boot version is >= 2.07, then it will do two things:
      
       1. Check the bootparams loadflags to see if we should reload the
          segment registers and clear interrupts.  This is appropriate
          for normal native boot and some paravirtualized environments, but
          inappropriate for others.
      
       2. Check the hardware architecture, and dispatch to the appropriate
          kernel entrypoint.  If the bootloader doesn't set this, then we
          simply do the normal boot sequence.
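
      A hedged sketch of that dispatch (the KEEP_SEGMENTS flag and the
      loadflags/hardware_subarch fields follow the boot protocol; the
      surrounding scaffolding is made up):

      	#include <stdint.h>

      	#define KEEP_SEGMENTS (1 << 6)  /* loadflags bit: skip segment reload */

      	struct setup_header_sketch {       /* simplified subset of the header */
      		uint8_t  loadflags;
      		uint32_t hardware_subarch; /* 0 = default, others = paravirt */
      	};

      	typedef void (*entry_fn)(void);

      	static void boot_dispatch(const struct setup_header_sketch *hdr,
      	                          entry_fn subarch_entries[], unsigned int n)
      	{
      		unsigned int idx;

      		if (!(hdr->loadflags & KEEP_SEGMENTS)) {
      			/* reload segment registers and clear interrupts here */
      		}

      		idx = (hdr->hardware_subarch < n) ? hdr->hardware_subarch : 0;
      		subarch_entries[idx]();    /* jump to the chosen entry point */
      	}
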
      Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Acked-by: H. Peter Anvin <hpa@zytor.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Vivek Goyal <vgoyal@in.ibm.com>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: Zachary Amsden <zach@vmware.com>
      Cc: Andi Kleen <ak@suse.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a24e7851
  20. 11 October 2007, 2 commits
  21. 13 July 2007, 1 commit
  22. 03 January 2007, 1 commit
  23. 07 December 2006, 3 commits
    • [PATCH] i386: Implement CONFIG_PHYSICAL_ALIGN · e69f202d
      Committed by Vivek Goyal
      o Now CONFIG_PHYSICAL_START is being replaced with CONFIG_PHYSICAL_ALIGN.
        Hardcoding the kernel's physical start value creates a problem in the
        relocatable kernel context due to boot loader limitations. For example,
        if somebody compiles a relocatable kernel intended to run from address
        4MB, it will actually run from 1MB, because grub loads the kernel at
        physical address 1MB and a relocatable kernel simply runs from wherever
        it was loaded. So somebody wanting to run the kernel from a 4MB-aligned
        location (for improved performance reasons) can't do that.

      o Hence, Eric proposed that CONFIG_PHYSICAL_ALIGN makes more sense in the
        relocatable kernel context. At run time the kernel will move itself to
        a physical address that meets the user-specified alignment
        restrictions.
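
      A sketch of the alignment computation this implies (illustrative only;
      CONFIG_PHYSICAL_ALIGN is assumed to be a power of two):

      	#include <stdint.h>
      	#include <stdio.h>

      	/* Round the address the kernel was actually loaded at up to the
      	 * configured alignment; the kernel then moves itself there. */
      	static uint32_t align_kernel_address(uint32_t loaded_at, uint32_t align)
      	{
      		return (loaded_at + align - 1) & ~(align - 1);
      	}

      	int main(void)
      	{
      		/* e.g. loaded by grub at 1MB with a 4MB alignment */
      		printf("runs from %#x\n", align_kernel_address(0x100000, 0x400000));
      		return 0;
      	}
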
      Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      e69f202d
    • [PATCH] i386: Relocatable kernel support · 968de4f0
      Committed by Eric W. Biederman
      This patch modifies the i386 kernel so that if CONFIG_RELOCATABLE is
      selected it will be able to be loaded at any 4K aligned address below
      1G.  The technique used is to compile the decompressor with -fPIC and
      modify it so the decompressor is fully relocatable.  For the main
      kernel relocations are generated.  Resulting in a kernel that is relocatable
      with no runtime overhead and no need to modify the source code.
      
      A reserved 32-bit word in the parameters has been assigned
      to serve as a stack so we can figure out where we are running.
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      968de4f0
    • [PATCH] i386: CONFIG_PHYSICAL_START cleanup · 2a43f3ed
      Committed by Eric W. Biederman
      Defining __PHYSICAL_START and __KERNEL_START in asm-i386/page.h works but
      it triggers a full kernel rebuild for the silliest of reasons.  This
      modifies the users to use CONFIG_PHYSICAL_START and linux/config.h
      directly, which prevents the full-rebuild problem and makes the code
      much more maintainer- and hopefully user-friendly.
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      2a43f3ed
  24. 26 June 2005, 1 commit
    • [PATCH] kexec: x86: add CONFIG_PYSICAL_START · 3d345e3f
      Committed by Eric W. Biederman
      For one kernel to report a crash another kernel has created we need
      to have 2 kernels loaded simultaneously in memory.  To accomplish this
      the two kernels need to built to run at different physical addresses.
      
      This patch adds the CONFIG_PHYSICAL_START option to the x86 kernel
      so we can do just that.  You need to know what you are doing and what
      the ramifications are before changing this value; most users won't
      care, so I have made it depend on CONFIG_EMBEDDED.
      
      bzImage kernels will work and run at a different address when compiled
      with this option but they will still load at 1MB.  If you need a kernel
      loaded at a different address as well you need to boot a vmlinux.
      Signed-off-by: Eric Biederman <ebiederm@xmission.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      3d345e3f
  25. 17 April 2005, 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Committed by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!
      1da177e4