1. 19 September 2009 (1 commit)
  2. 12 May 2009 (2 commits)
    • x86, boot: make kernel_alignment adjustable; new bzImage fields · 37ba7ab5
      Committed by H. Peter Anvin
      Make the kernel_alignment field adjustable; this allows us to set it
      to a large value (intended to be 16 MB to avoid ZONE_DMA contention,
      memory holes and other weirdness) while a smart bootloader can still
      force loading at a lesser alignment if absolutely necessary.
      
      Also export pref_address (preferred loading address, corresponding to
      the link-time address) and init_size, the total amount of linear
      memory the kernel will require during initialization.
      
      [ Impact: allows better kernel placement, gives bootloader more info ]
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
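
      A minimal sketch in C of how a boot loader might read the new fields from
      a bzImage file.  The offsets follow the documented x86 boot protocol
      (kernel_alignment at 0x230, pref_address at 0x258, init_size at 0x260);
      the helper and function names are illustrative assumptions, not kernel
      code.

          #include <stdint.h>
          #include <stdio.h>
          #include <string.h>

          /* Assumed setup-header offsets, per the x86 boot protocol document. */
          #define HDR_MAGIC_OFS        0x202   /* "HdrS" signature */
          #define KERNEL_ALIGNMENT_OFS 0x230   /* u32: alignment used/required */
          #define PREF_ADDRESS_OFS     0x258   /* u64: preferred load address */
          #define INIT_SIZE_OFS        0x260   /* u32: memory needed for init */

          static uint32_t rd32(const uint8_t *p, size_t o)
          {
                  uint32_t v;
                  memcpy(&v, p + o, sizeof(v));
                  return v;
          }

          static uint64_t rd64(const uint8_t *p, size_t o)
          {
                  uint64_t v;
                  memcpy(&v, p + o, sizeof(v));
                  return v;
          }

          /* Print the placement hints a smart boot loader could honour. */
          void show_placement_hints(const uint8_t *bzimage)
          {
                  if (memcmp(bzimage + HDR_MAGIC_OFS, "HdrS", 4) != 0) {
                          fprintf(stderr, "no modern setup header\n");
                          return;
                  }
                  printf("kernel_alignment: %#x\n",
                         (unsigned int)rd32(bzimage, KERNEL_ALIGNMENT_OFS));
                  printf("pref_address:     %#llx\n",
                         (unsigned long long)rd64(bzimage, PREF_ADDRESS_OFS));
                  printf("init_size:        %#x\n",
                         (unsigned int)rd32(bzimage, INIT_SIZE_OFS));
          }
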
    • x86, boot: remove dead code from boot/compressed/head_*.S · 99aa4559
      Committed by H. Peter Anvin
      Remove a couple of lines of dead code from
      arch/x86/boot/compressed/head_*.S; all of these update registers that
      are dead in the current code.
      
      [ Impact: cleanup ]
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  3. 09 May 2009 (7 commits)
    • x86, boot: determine compressed code offset at compile time · 02a884c0
      Committed by H. Peter Anvin
      Determine the compressed code offset (from the kernel runtime address)
      at compile time.  This allows some minor optimizations in
      arch/x86/boot/compressed/head_*.S, but more importantly it makes this
      value available to the build process, which will enable a future patch
      to export the necessary linear memory footprint into the bzImage
      header.
      
      [ Impact: cleanup, future patch enabling ]
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
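
      The constraint can be sketched in C: the compressed payload has to sit
      far enough above the decompression target that in-place decompression
      never overwrites compressed data it has not yet read.  The constants
      below are illustrative of the usual worst-case slack, not necessarily
      the exact values the kernel's build tooling uses.

          #include <stddef.h>

          /* Compute a safe offset (from the kernel runtime address) at which
           * to place the compressed payload for in-place decompression. */
          static size_t extract_offset(size_t compressed_len, size_t output_len)
          {
                  size_t offs = 0;

                  if (output_len > compressed_len)      /* room for the growth */
                          offs = output_len - compressed_len;
                  offs += output_len >> 12;             /* ~1 byte slack per 4K */
                  offs += 64 * 1024 + 128;              /* fixed decompressor slack */
                  return (offs + 4095) & ~(size_t)4095; /* round up to 4K */
          }
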
    • x86, boot: use appropriate rep string for move and clear · 36d3793c
      Committed by H. Peter Anvin
      In the pre-decompression code, use the appropriate largest possible
      rep movs and rep stos to move code and clear bss, respectively.  For
      reverse copy, do note that the initial values are supposed to be the
      address of the first (highest) copy datum, not one byte beyond the end
      of the buffer.
      
      rep strings are not necessarily the fastest way to perform these
      operations on all current processors, but are likely to be in the
      future, and perhaps more importantly, we want to encourage the
      architecturally right thing to do here.
      
      This also fixes a couple of trivial inefficiencies on 64 bits.
      
      [ Impact: trivial performance enhancement, increase code similarity ]
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
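
      The endpoint subtlety called out above, rendered as a plain-C sketch
      (the real code is the rep-string assembly in head_*.S): for a descending
      copy the cursors start at the last element to be copied, not one past
      the end of the buffer.

          #include <stddef.h>
          #include <stdint.h>

          /* Copy n machine words from src to dst walking downwards, the way a
           * "std; rep movs" sequence does.  The cursors are initialized to the
           * address of the highest word to copy (base + n - 1), not base + n. */
          static void copy_words_backwards(uint64_t *dst, const uint64_t *src, size_t n)
          {
                  uint64_t *d;
                  const uint64_t *s;

                  if (n == 0)
                          return;
                  d = dst + n - 1;
                  s = src + n - 1;
                  while (n--)
                          *d-- = *s--;
          }
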
    • x86, boot: zero EFLAGS on 32 bits · 97541912
      Committed by H. Peter Anvin
      The 64-bit code already clears EFLAGS as soon as it has a stack.  This
      seems like a reasonable precaution, so do it on 32 bits as well.
      
      [ Impact: extra paranoia ]
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    • x86, boot: set up the decompression stack as early as possible · 0a137736
      Committed by H. Peter Anvin
      Set up the decompression stack as soon as we know where it needs to
      go.  That way we have a full-service stack as soon as possible, rather
      than relying on the BP_scratch field.
      
      Note that the stack does need to be empty during bss zeroing (or
      else the stack needs to be moved out of the bss segment, which is also
      an option.)
      
      [ Impact: cleanup, minor paranoia ]
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    • x86, boot: straighten out ranges to copy/zero in compressed/head*.S · 5b11f1ce
      Committed by H. Peter Anvin
      Both on 32 and 64 bits, we copy all the way up to the end of bss,
      except that on 64 bits there is a hack to avoid copying on top of the
      page tables.  There is no point in copying bss at all, especially
      since we are just about to zero it all anyway.
      
      To clean up and unify the handling, we now do:
      
        - copy from startup_32 to _bss.
        - zero from _bss to _ebss.
        - the _ebss symbol is aligned to an 8-byte boundary.
        - the page tables are moved to a separate section.
      
      Use _bss as the copy endpoint since _edata may be misaligned.
      
      [ Impact: cleanup, trivial performance improvement ]
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
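
      A C rendering of the unified ranges, assuming the usual idiom of
      declaring the linker-script symbols as extern arrays (this is a sketch
      of the logic, not the actual assembly in head_*.S):

          #include <string.h>

          /* Symbols provided by the compressed-kernel linker script; only
           * their addresses are meaningful. */
          extern char startup_32[], _bss[], _ebss[];

          /* Copy the image from startup_32 up to (but not including) _bss,
           * then zero from _bss to _ebss at the destination.  _bss is the
           * copy endpoint because _edata may be misaligned; _ebss is 8-byte
           * aligned so the clear can work in full words. */
          static void relocate_and_clear(char *dest)
          {
                  size_t image_len = (size_t)(_bss - startup_32);

                  memmove(dest, startup_32, image_len);
                  memset(dest + image_len, 0, (size_t)(_ebss - _bss));
          }
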
    • x86, boot: stylistic cleanups for boot/compressed/head_32.S · 5f64ec64
      Committed by H. Peter Anvin
      Reformat arch/x86/boot/compressed/head_32.S to be closer to currently
      preferred kernel assembly style, that is:
      
      - opcode and operand separated by tab
      - operands separated by ", "
      - C-style comments
      
      This also makes it more similar to head_64.S.
      
      [ Impact: cleanup, no object code change ]
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    • x86, boot: use BP_scratch in arch/x86/boot/compressed/head_*.S · bd2a3698
      Committed by H. Peter Anvin
      Use the BP_scratch symbol from asm-offsets.h instead of hard-coding
      the location.
      
      [ Impact: cleanup ]
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
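
      For reference, BP_scratch is produced by the usual asm-offsets
      machinery.  A simplified sketch of that idiom follows; the macro shapes
      mirror the kernel's kbuild helpers, but the struct layout and file
      structure here are assumptions for illustration only (such a file is
      only ever compiled with "gcc -S", and a sed script turns the "->"
      marker lines in the generated assembly into #define entries in
      asm-offsets.h).

          #include <stddef.h>

          struct boot_params_example {      /* stand-in for the real boot_params */
                  char         pad[0x1e4];  /* assumed padding, for illustration */
                  unsigned int scratch;     /* scratch word used by early boot */
          };

          /* Emit a "->SYMBOL value" marker line into the generated assembly. */
          #define DEFINE(sym, val) \
                  asm volatile("\n->" #sym " %0 " #val : : "i" (val))
          #define OFFSET(sym, str, mem) \
                  DEFINE(sym, offsetof(struct str, mem))

          void common(void)
          {
                  OFFSET(BP_scratch, boot_params_example, scratch);
          }
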
  4. 27 April 2009 (1 commit)
    • x86: unify arch/x86/boot/compressed/vmlinux_*.lds · 51b26ada
      Committed by Linus Torvalds
      Look at the:
      
      	diff -u arch/x86/boot/compressed/vmlinux_*.lds
      
      output and realize that they're basically exactly the same except for
      trivial naming differences, and the fact that the 64-bit version has a
      "pgtable" thing.
      
      So unify them.
      
      There's some trivial cleanup there (make the output format a Kconfig thing
      rather than doing #ifdef's for it, and unify both 32-bit and 64-bit BSS
      end to "_ebss", where 32-bit used to use the traditional "_end"), but
      other than that it's really a very mindless and straight conversion.
      
      For example, I think we should aim to remove "startup_32" vs "startup_64",
      and just call it "startup", and get rid of one more difference. I didn't
      do that.
      
      Also, notice the comment in the unified vmlinux.lds.S talks about
      "head_64" and "startup_32" which is an odd and incorrect mix, but that was
      actually what the old 64-bit only lds file had, so the confusion isn't
      new, and now that mixing is arguably more accurate thanks to the
      vmlinux.lds.S file being shared between the two cases ;)
      
      [ Impact: cleanup, unification ]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Acked-by: Sam Ravnborg <sam@ravnborg.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  5. 20 February 2009 (1 commit)
  6. 14 February 2009 (1 commit)
  7. 12 August 2008 (1 commit)
  8. 20 April 2008 (1 commit)
  9. 28 October 2007 (1 commit)
  10. 22 October 2007 (1 commit)
    • i386: paravirt boot sequence · a24e7851
      Committed by Rusty Russell
      This patch uses the updated boot protocol to do paravirtualized boot.
      If the boot version is >= 2.07, then it will do two things:
      
       1. Check the bootparams loadflags to see if we should reload the
          segment registers and clear interrupts.  This is appropriate
          for normal native boot and some paravirtualized environments, but
           inappropriate for others.
      
       2. Check the hardware architecture, and dispatch to the appropriate
          kernel entrypoint.  If the bootloader doesn't set this, then we
          simply do the normal boot sequence.
      Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Acked-by: H. Peter Anvin <hpa@zytor.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Vivek Goyal <vgoyal@in.ibm.com>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: Zachary Amsden <zach@vmware.com>
      Cc: Andi Kleen <ak@suse.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
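
      A C-level sketch of the two checks described above.  The flag and field
      names follow the documented boot protocol (KEEP_SEGMENTS in loadflags,
      hardware_subarch in the setup header); the trimmed struct and the
      dispatch table are illustrative assumptions, and the real logic lives in
      the early boot assembly.

          #include <stdint.h>

          #define KEEP_SEGMENTS (1 << 6)     /* loadflags bit, protocol >= 2.07 */

          struct setup_header_bits {         /* only the fields used here */
                  uint16_t version;          /* boot protocol version, e.g. 0x0207 */
                  uint8_t  loadflags;
                  uint32_t hardware_subarch; /* 0 = default PC, nonzero = paravirt */
          };

          typedef void (*entry_fn)(void);

          static void boot_dispatch(const struct setup_header_bits *hdr,
                                    const entry_fn *subarch_entry, unsigned int n)
          {
                  if (hdr->version >= 0x0207) {
                          if (!(hdr->loadflags & KEEP_SEGMENTS)) {
                                  /* Reload segment registers and clear
                                   * interrupts here: right for native boot,
                                   * wrong for some paravirtualized
                                   * environments, hence the flag. */
                          }
                          if (hdr->hardware_subarch < n) {
                                  subarch_entry[hdr->hardware_subarch]();
                                  return;
                          }
                  }
                  subarch_entry[0]();        /* normal boot sequence */
          }
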
  11. 11 October 2007 (2 commits)
  12. 13 July 2007 (1 commit)
  13. 03 January 2007 (1 commit)
  14. 07 December 2006 (3 commits)
    • [PATCH] i386: Implement CONFIG_PHYSICAL_ALIGN · e69f202d
      Committed by Vivek Goyal
      o CONFIG_PHYSICAL_START is now being replaced with CONFIG_PHYSICAL_ALIGN.
        Hard-coding the kernel's physical start address creates a problem for a
        relocatable kernel because of boot loader limitations. For example, if
        somebody compiles a relocatable kernel intended to run from address 4MB,
        it will nevertheless run from 1MB, because grub loads the kernel at
        physical address 1MB and a relocatable kernel runs from wherever it was
        loaded. So somebody wanting to run the kernel from a 4MB-aligned
        location (for improved performance reasons) can't do that.
      
      o Hence, Eric proposed that CONFIG_PHYSICAL_ALIGN probably makes more
        sense for a relocatable kernel. At run time the kernel will move itself
        to a physical address that meets the user-specified alignment
        restriction.
      Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
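
      The run-time move boils down to rounding the load address up to the
      configured alignment; a one-function sketch (names are illustrative, and
      the alignment is assumed here to be a power of two):

          /* Round the address the boot loader handed us up to the next
           * CONFIG_PHYSICAL_ALIGN boundary; the kernel then relocates itself
           * to run from there. */
          static unsigned long align_load_address(unsigned long load_addr,
                                                  unsigned long phys_align)
          {
                  return (load_addr + phys_align - 1) & ~(phys_align - 1);
          }
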
    • [PATCH] i386: Relocatable kernel support · 968de4f0
      Committed by Eric W. Biederman
      This patch modifies the i386 kernel so that if CONFIG_RELOCATABLE is
      selected it will be able to be loaded at any 4K aligned address below
      1G.  The technique used is to compile the decompressor with -fPIC and
      modify it so the decompressor is fully relocatable.  For the main
      kernel, relocations are generated, resulting in a kernel that is
      relocatable with no runtime overhead and no need to modify the source
      code.
      
      A reserved 32-bit word in the boot parameters has been assigned to serve
      as a stack so we can figure out where we are running.
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
    • [PATCH] i386: CONFIG_PHYSICAL_START cleanup · 2a43f3ed
      Committed by Eric W. Biederman
      Defining __PHYSICAL_START and __KERNEL_START in asm-i386/page.h works but
      it triggers a full kernel rebuild for the silliest of reasons.  This
      modifies the users to use CONFIG_PHYSICAL_START and linux/config.h
      directly, which prevents the full-rebuild problem and makes the code much
      more maintainer-friendly and hopefully more user-friendly.
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
  15. 26 June 2005 (1 commit)
    • [PATCH] kexec: x86: add CONFIG_PHYSICAL_START · 3d345e3f
      Committed by Eric W. Biederman
      For one kernel to report a crash that another kernel has created, we need
      to have two kernels loaded simultaneously in memory.  To accomplish this,
      the two kernels need to be built to run at different physical addresses.
      
      This patch adds the CONFIG_PHYSICAL_START option to the x86 kernel so we
      can do just that.  You need to know what you are doing and what the
      ramifications are before changing this value, and most users won't care,
      so I have made it depend on CONFIG_EMBEDDED.
      
      bzImage kernels will work and run at a different address when compiled
      with this option, but they will still load at 1MB.  If you need a kernel
      loaded at a different address as well, you need to boot a vmlinux.
      Signed-off-by: Eric Biederman <ebiederm@xmission.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  16. 17 April 2005 (1 commit)
    • Linux-2.6.12-rc2 · 1da177e4
      Committed by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!