1. 03 Aug 2010, 1 commit
  2. 03 Mar 2010, 1 commit
  3. 19 Nov 2009, 1 commit
    • x86: Eliminate redundant/contradicting cache line size config options · 350f8f56
      Authored by Jan Beulich
      Rather than having X86_L1_CACHE_BYTES and X86_L1_CACHE_SHIFT
      (with inconsistent defaults), just having the latter suffices as
      the former can be easily calculated from it.
      
      To be consistent, also change X86_INTERNODE_CACHE_BYTES to
      X86_INTERNODE_CACHE_SHIFT, and set it to 7 (128 bytes) for NUMA
      to account for last level cache line size (which here matters
      more than L1 cache line size).
      
      Finally, make sure the default value for X86_L1_CACHE_SHIFT when
      X86_GENERIC is selected is seen before the defaults for the
      individual CPU model options (unlike on x86-64, where GENERIC_CPU is
      part of the choice construct, X86_GENERIC is a separate option on
      ix86).
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Acked-by: Ravikiran Thirumalai <kiran@scalex86.org>
      Acked-by: Nick Piggin <npiggin@suse.de>
      LKML-Reference: <4AFD5710020000780001F8F0@vpn.id2.novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
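      To make the shift-to-bytes relationship concrete, the following is a
      minimal C sketch of how the byte values can be derived from the
      remaining *_SHIFT options; the macro names mirror what the kernel's
      arch/x86/include/asm/cache.h does, but the exact spellings and the
      sample CONFIG values here are assumptions for illustration, not a
      quote of the patch.

        /* Derive cache-line byte sizes from the CONFIG_*_SHIFT options, so
         * separate *_BYTES config symbols become unnecessary. The CONFIG
         * values below are stand-ins for real Kconfig output. */
        #include <stdio.h>

        #define CONFIG_X86_L1_CACHE_SHIFT        6   /* e.g. 64-byte L1 lines  */
        #define CONFIG_X86_INTERNODE_CACHE_SHIFT 7   /* 128 bytes on NUMA      */

        #define L1_CACHE_SHIFT        CONFIG_X86_L1_CACHE_SHIFT
        #define L1_CACHE_BYTES        (1 << L1_CACHE_SHIFT)
        #define INTERNODE_CACHE_SHIFT CONFIG_X86_INTERNODE_CACHE_SHIFT
        #define INTERNODE_CACHE_BYTES (1 << INTERNODE_CACHE_SHIFT)

        int main(void)
        {
                printf("L1 line: %d bytes, internode line: %d bytes\n",
                       L1_CACHE_BYTES, INTERNODE_CACHE_BYTES);
                return 0;
        }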
  4. 19 Sep 2009, 1 commit
  5. 09 May 2009, 2 commits
    • x86, boot: straighten out ranges to copy/zero in compressed/head*.S · 5b11f1ce
      Authored by H. Peter Anvin
      Both on 32 and 64 bits, we copy all the way up to the end of bss,
      except that on 64 bits there is a hack to avoid copying on top of the
      page tables.  There is no point in copying bss at all, especially
      since we are just about to zero it all anyway.
      
      To clean up and unify the handling, we now do:
      
        - copy from startup_32 to _bss.
        - zero from _bss to _ebss.
        - the _ebss symbol is aligned to an 8-byte boundary.
        - the page tables are moved to a separate section.
      
      Use _bss as the copy endpoint since _edata may be misaligned.
      
      [ Impact: cleanup, trivial performance improvement ]
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
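      The following is a self-contained C sketch of the copy/zero scheme
      listed above. The real code is x86 assembly operating on the
      linker-provided startup_32, _bss and _ebss symbols, so the simulated
      buffer and offsets here are assumptions made only to keep the example
      runnable.

        /* Copy only the initialized part of the image (startup_32.._bss) to
         * the destination, then zero the _bss.._ebss range there instead of
         * copying it. Sizes and contents are faked for the demo. */
        #include <stdio.h>
        #include <string.h>

        #define IMAGE_SIZE 4096        /* pretend size of startup_32.._ebss       */
        #define BSS_OFFSET 3072        /* pretend offset of _bss within the image */

        int main(void)
        {
                static char loaded_image[IMAGE_SIZE];   /* image as loaded        */
                static char dest[IMAGE_SIZE];           /* relocation destination */

                memset(loaded_image, 0xAA, sizeof loaded_image); /* fake contents */

                memcpy(dest, loaded_image, BSS_OFFSET);          /* copy to _bss  */
                memset(dest + BSS_OFFSET, 0, IMAGE_SIZE - BSS_OFFSET); /* zero bss */

                printf("copied %d bytes, zeroed %d bytes\n",
                       BSS_OFFSET, IMAGE_SIZE - BSS_OFFSET);
                return 0;
        }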
    • x86, boot: align the .bss section in the decompressor · 0b4eb462
      Authored by H. Peter Anvin
      Aligning the .bss section makes it trivial to use large operation
      sizes for moving the initialized sections and clearing the .bss.
      The alignment chosen (L1 cache) is somewhat arbitrary, but should be
      large enough to avoid all known performance traps and small enough
      not to cause trouble.
      
      [ Impact: trivial performance enhancement, future patch prep ]
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
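      As an illustration of why the alignment helps, here is a hedged,
      self-contained C sketch of clearing an aligned range in 8-byte chunks,
      which is roughly what a rep-stos style loop relies on; the real
      decompressor does this in assembly, so only the general shape is taken
      from the commit.

        /* With both ends of the range aligned to 8 bytes, the clear loop can
         * use full 64-bit stores with no head or tail fix-up. */
        #include <stdint.h>
        #include <stdio.h>

        static void clear_aligned(uint64_t *start, uint64_t *end)
        {
                while (start < end)            /* one 8-byte store per step */
                        *start++ = 0;
        }

        int main(void)
        {
                static uint64_t fake_bss[512]; /* simulated, aligned .bss */

                clear_aligned(fake_bss, fake_bss + 512);
                printf("cleared %zu bytes\n", sizeof fake_bss);
                return 0;
        }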
  6. 30 Apr 2009, 1 commit
  7. 27 Apr 2009, 1 commit
    • x86: unify arch/x86/boot/compressed/vmlinux_*.lds · 51b26ada
      Authored by Linus Torvalds
      Look at the:
      
      	diff -u arch/x86/boot/compressed/vmlinux_*.lds
      
      output and realize that they're basically exactly the same except for
      trivial naming differences, and the fact that the 64-bit version has a
      "pgtable" thing.
      
      So unify them.
      
      There's some trivial cleanup there (make the output format a Kconfig thing
      rather than doing #ifdef's for it, and unify both the 32-bit and 64-bit BSS
      end to "_ebss", where 32-bit used to use the traditional "_end"), but
      other than that it's really a very mindless and straight conversion.
      
      For example, I think we should aim to remove "startup_32" vs "startup_64",
      and just call it "startup", and get rid of one more difference. I didn't
      do that.
      
      Also, notice the comment in the unified vmlinux.lds.S talks about
      "head_64" and "startup_32" which is an odd and incorrect mix, but that was
      actually what the old 64-bit only lds file had, so the confusion isn't
      new, and now that mixing is arguably more accurate thanks to the
      vmlinux.lds.S file being shared between the two cases ;)
      
      [ Impact: cleanup, unification ]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Acked-by: Sam Ravnborg <sam@ravnborg.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  8. 20 Apr 2008, 1 commit
  9. 02 Feb 2008, 1 commit
  10. 30 Jan 2008, 2 commits
  11. 11 Oct 2007, 2 commits
  12. 03 May 2007, 1 commit
    • [PATCH] x86-64: Relocatable Kernel Support · 1ab60e0f
      Authored by Vivek Goyal
      This patch modifies the x86_64 kernel so that it can be loaded and run
      at any 2M aligned address, below 512G.  The technique used is to
      compile the decompressor with -fPIC and modify it so the decompressor
      is fully relocatable.  For the main kernel the page tables are
      modified so the kernel remains at the same virtual address.  In
      addition a variable phys_base is kept that holds the physical address
      the kernel is loaded at.  __pa_symbol is modified to add phys_base
      when we take the address of a kernel symbol.
      
      When loaded with a normal bootloader the decompressor will decompress
      the kernel to 2M and it will run there.  This both ensures the
      relocation code is always exercised, and makes it easier to use 2M
      pages for the kernel and the cpu.
      
      AK: changed to not make RELOCATABLE default in Kconfig
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
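      A minimal sketch of the phys_base idea described above, assuming the
      usual x86-64 layout where kernel symbols live in a fixed virtual text
      mapping; the constant and helper names below are illustrative
      stand-ins (KERNEL_TEXT_VBASE plays the role of __START_KERNEL_map),
      and the values are made up so the example runs in user space.

        /* Kernel symbols are linked at a fixed virtual base, so converting a
         * symbol's virtual address to a physical one only needs the physical
         * load offset recorded in phys_base at early boot.
         * Assumes a 64-bit build (unsigned long is 64 bits). */
        #include <stdio.h>

        #define KERNEL_TEXT_VBASE 0xffffffff80000000UL /* stand-in for __START_KERNEL_map */

        static unsigned long phys_base;   /* where the kernel actually landed */

        static unsigned long pa_symbol(unsigned long vaddr)
        {
                /* mirrors the __pa_symbol() idea: drop the virtual base, add phys_base */
                return vaddr - KERNEL_TEXT_VBASE + phys_base;
        }

        int main(void)
        {
                unsigned long sym = KERNEL_TEXT_VBASE + 0x1234; /* pretend symbol */

                phys_base = 0x200000;     /* pretend load address of 2 MiB */
                printf("symbol %#lx -> phys %#lx\n", sym, pa_symbol(sym));
                return 0;
        }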
  13. 07 Dec 2006, 1 commit
    • [PATCH] i386: Relocatable kernel support · 968de4f0
      Authored by Eric W. Biederman
      This patch modifies the i386 kernel so that if CONFIG_RELOCATABLE is
      selected it can be loaded at any 4K-aligned address below
      1G.  The technique used is to compile the decompressor with -fPIC and
      modify it so the decompressor is fully relocatable.  For the main
      kernel, relocations are generated, resulting in a kernel that is
      relocatable with no runtime overhead and no need to modify the source code.
      
      A reserved 32bit word in the parameters has been assigned
      to serve as a stack so we can figure out where we are running.
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
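      To make the "relocations are generated" part concrete, the following is
      a hedged, self-contained C sketch of how a boot-time fixup of 32-bit
      absolute relocations typically works when the image runs at a different
      address than it was linked for; the table layout and names are
      assumptions for illustration, not the patch's actual format.

        /* Each entry in relocs[] is the offset of a location holding an
         * absolute 32-bit address. If the image moved by 'delta' bytes, the
         * stored addresses must be adjusted by delta as well. */
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        static void apply_relocs(uint8_t *image, const uint32_t *relocs,
                                 size_t count, uint32_t delta)
        {
                for (size_t i = 0; i < count; i++) {
                        uint32_t val;

                        memcpy(&val, image + relocs[i], sizeof val);
                        val += delta;              /* fix the stored address */
                        memcpy(image + relocs[i], &val, sizeof val);
                }
        }

        int main(void)
        {
                static uint8_t image[64];
                const uint32_t relocs[] = { 8, 24 }; /* offsets needing fix-up */
                uint32_t delta = 0x100000;           /* loaded 1 MiB higher    */
                uint32_t ref = 0x00400000;           /* pretend absolute ref   */

                memcpy(image + 8, &ref, sizeof ref);
                apply_relocs(image, relocs, 2, delta);

                memcpy(&ref, image + 8, sizeof ref);
                printf("first ref now %#x\n", ref);
                return 0;
        }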