- 21 May 2009, 1 commit
-
-
Committed by H. Peter Anvin
Correct the calculation of ZO_INIT_SIZE (the amount of memory we need during decompression). One symbol (ZO_startup_32) was missing from zoffset.h, and another (ZO_z_extract_offset) was misspelled.
[ Impact: build fix ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
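As a rough illustration of what this bookkeeping protects: the bootloader must reserve the larger of the compressed-image footprint (the ZO_* symbols, including the extract offset named above) and the decompressed-kernel footprint (the VO_* symbols). A minimal C sketch, assuming parameter names modelled on the zoffset.h symbols; the helper itself is illustrative, not kernel code:

    /* minimal sketch: INIT_SIZE must cover whichever footprint is larger */
    static unsigned long init_size(unsigned long zo_startup_32,
                                   unsigned long zo__end,
                                   unsigned long zo_z_extract_offset,
                                   unsigned long vo__text,
                                   unsigned long vo__end)
    {
        /* compressed image plus the offset it is extracted from */
        unsigned long zo = (zo__end - zo_startup_32) + zo_z_extract_offset;
        /* decompressed kernel image, start of text to end */
        unsigned long vo = vo__end - vo__text;

        return zo > vo ? zo : vo;
    }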
-
- 13 May 2009, 2 commits
-
-
Committed by H. Peter Anvin
Handle the misconfiguration where CONFIG_PHYSICAL_START is incompatible with CONFIG_PHYSICAL_ALIGN. This is a configuration error, but one which arises easily since Kconfig doesn't have the smarts to express the true relationship between these two variables. Hence, align __PHYSICAL_START the same way we align LOAD_PHYSICAL_ADDR in <asm/boot.h>. For non-relocatable kernels, this misconfiguration would cause the boot to fail.
[ Impact: fix boot failures for non-relocatable kernels ]
Reported-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
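For reference, the rounding that LOAD_PHYSICAL_ADDR performs, and that __PHYSICAL_START now mirrors, is the usual align-up idiom. A minimal sketch; the config values shown are examples for illustration, not taken from any particular .config:

    /* round x up to the next multiple of a (a must be a power of two) */
    #define ALIGN_UP(x, a)  (((x) + (a) - 1) & ~((a) - 1))

    /* illustrative example values */
    #define CONFIG_PHYSICAL_START  0x01000000   /* requested start: 16 MB */
    #define CONFIG_PHYSICAL_ALIGN  0x01000000   /* required alignment: 16 MB */

    /* both LOAD_PHYSICAL_ADDR and __PHYSICAL_START are derived this way */
    #define LOAD_PHYSICAL_ADDR  ALIGN_UP(CONFIG_PHYSICAL_START, CONFIG_PHYSICAL_ALIGN)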
-
Committed by H. Peter Anvin
arch/x86/boot/compressed/misc.c contains several sanity checks on the output address. Correct constraints that are no longer correct:
- the alignment test should be MIN_KERNEL_ALIGN on both 32 and 64 bits.
- the 64-bit maximum address was set to 2^40, which was the limit of one specific x86-64 implementation. Change the test to 2^46, the current Linux limit, and at least try to test the end rather than the beginning.
- for non-relocatable kernels, test against LOAD_PHYSICAL_ADDR on both 32 and 64 bits.
[ Impact: fix potential boot failure due to invalid tests ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
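A compact sketch of the three corrected checks; the macro names follow the commit message, but the alignment and link-address constants below are stand-ins for illustration only, not the decompressor's actual definitions:

    #include <stdint.h>

    #define MIN_KERNEL_ALIGN    (1UL << 21)     /* assumed value, for illustration */
    #define MAX_PHYS_ADDR_64    (1ULL << 46)    /* current Linux limit cited above */
    #define LOAD_PHYSICAL_ADDR  0x01000000UL    /* illustrative link-time address */

    /* returns NULL when the chosen output window looks sane */
    static const char *check_output(uint64_t output, uint64_t output_len,
                                    int is_64bit, int relocatable)
    {
        if (output & (MIN_KERNEL_ALIGN - 1))
            return "destination address inappropriately aligned";
        if (is_64bit && output + output_len > MAX_PHYS_ADDR_64)
            return "destination end address too large";    /* test the end */
        if (!relocatable && output != LOAD_PHYSICAL_ADDR)
            return "wrong destination address for a non-relocatable kernel";
        return 0;
    }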
-
- 12 May 2009, 10 commits
-
-
Committed by H. Peter Anvin
Long ago, in days of yore, it all began with a god named Thor. There were vikings and boats and some plans for a Linux kernel header. Unfortunately, a single 8-bit field was used for bootloader type and version. This has generally worked without *too* much pain, but we're getting close to flat running out of ID fields. Add extension fields for both type and version. The type will be extended if the old field is 0xE; the version is a simple MSB extension. Keep /proc/sys/kernel/bootloader_type containing (type << 4) + (ver & 0xf) for backwards compatibility, but also add /proc/sys/kernel/bootloader_version, which contains the full version number.
[ Impact: new feature to support more bootloaders ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
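A small sketch of how the extended fields described above might be decoded on the kernel side; the struct and field names are assumptions made for illustration, not the real boot_params layout:

    struct loader_id_fields {
        unsigned char type_of_loader;   /* legacy: (type << 4) | (version & 0xf) */
        unsigned char ext_loader_ver;   /* high bits of the version */
        unsigned char ext_loader_type;  /* extra type bits, valid when type == 0xE */
    };

    static void decode_loader_id(const struct loader_id_fields *h,
                                 unsigned int *type, unsigned int *version)
    {
        *type    = h->type_of_loader >> 4;
        *version = h->type_of_loader & 0xf;

        if (*type == 0xE)                       /* 0xE marks an extended type */
            *type = 0x10 + h->ext_loader_type;

        *version |= (unsigned int)h->ext_loader_ver << 4;   /* MSB extension */
    }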
-
Committed by H. Peter Anvin
Update CONFIG_RELOCATABLE, CONFIG_PHYSICAL_START and CONFIG_PHYSICAL_ALIGN to reflect the current defaults.
[ Impact: make defconfig match Kconfig defaults ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by H. Peter Anvin
Update defconfigs to reflect the current configuration files. No other changes.
[ Impact: updates defconfigs to match what "make defconfig" generates ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by H. Peter Anvin
Remove the EXPERIMENTAL tag from CONFIG_RELOCATABLE and make it the default. Relocatable kernels have been used for a while now, and should now have identical semantics to non-relocatable kernels when loaded by a non-relocating bootloader.
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by H. Peter Anvin
Default CONFIG_PHYSICAL_START and CONFIG_PHYSICAL_ALIGN each to 16 MB, so that both non-relocatable and relocatable kernels are loaded at 16 MB by a non-relocating bootloader. This is somewhat hacky, but it appears to be the only way to do this that does not break some set of existing bootloaders. We want to avoid the bottom 16 MB because of large page breakup, memory holes, and ZONE_DMA. Embedded systems may need to reduce this, or update their bootloaders to be aware of the new min_alignment field.
[ Impact: performance improvement, avoids problems on some systems ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by H. Peter Anvin
Document the new bzImage fields for kernel memory placement.
[ Impact: adds documentation ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by H. Peter Anvin
Make the kernel_alignment field adjustable; this allows us to set it to a large value (intended to be 16 MB, to avoid ZONE_DMA contention, memory holes and other weirdness) while a smart bootloader can still force loading at a lesser alignment if absolutely necessary. Also export pref_address (the preferred loading address, corresponding to the link-time address) and init_size, the total amount of linear memory the kernel will require during initialization.
[ Impact: allows better kernel placement, gives bootloader more info ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
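To illustrate how a bootloader might consume these three fields, here is a minimal sketch; the struct is a stand-in for the relevant header fields, and the helper name and parameters are assumptions rather than real boot-protocol definitions:

    #include <stdint.h>

    struct placement_hints {
        uint32_t kernel_alignment;  /* preferred physical alignment */
        uint64_t pref_address;      /* preferred (link-time) load address */
        uint32_t init_size;         /* linear memory needed through initialization */
    };

    static uint64_t pick_load_address(const struct placement_hints *h,
                                      uint64_t candidate, int relocatable)
    {
        if (!relocatable)
            return h->pref_address;              /* must honour the link address */

        /* round the candidate up to the advertised alignment */
        uint64_t a = h->kernel_alignment;
        uint64_t addr = (candidate + a - 1) & ~(a - 1);

        /* the caller must still verify [addr, addr + init_size) is usable RAM */
        return addr;
    }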
-
Committed by H. Peter Anvin
Remove a couple of lines of dead code from arch/x86/boot/compressed/head_*.S; all of these update registers that are dead in the current code.
[ Impact: cleanup ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by H. Peter Anvin
Use LOAD_PHYSICAL_ADDR instead of CONFIG_PHYSICAL_START in the 64-bit decompression code, for equivalence with the 32-bit code.
[ Impact: cleanup, increases code similarity ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by H. Peter Anvin
Make symbols from the main vmlinux, as opposed to just compressed/vmlinux, available to header.S. Also, export a few additional symbols. This will be used in a subsequent patch to export the total memory footprint of the kernel.
[ Impact: enable future enhancement ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 09 May 2009, 13 commits
-
-
Committed by H. Peter Anvin
Determine the compressed code offset (from the kernel runtime address) at compile time. This allows some minor optimizations in arch/x86/boot/compressed/head_*.S, but more importantly it makes this value available to the build process, which will enable a future patch to export the necessary linear memory footprint into the bzImage header.
[ Impact: cleanup, future patch enabling ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
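The offset in question exists so that in-place decompression never overwrites compressed bytes before they have been read: the payload sits past the kernel's runtime address by the worst-case expansion plus some slack. A minimal sketch of that arithmetic; the slack constants are assumptions for illustration, not the values the build actually computes:

    /* how far past the kernel's runtime address the compressed payload
     * must sit so that decompression can run safely in place */
    static unsigned long extract_offset(unsigned long compressed_len,
                                        unsigned long uncompressed_len)
    {
        unsigned long off = 0;

        if (uncompressed_len > compressed_len)
            off = uncompressed_len - compressed_len;   /* worst-case growth */

        off += uncompressed_len >> 12;   /* per-block overhead (assumed) */
        off += 64 * 1024;                /* fixed safety slack (assumed) */

        return (off + 4095) & ~4095UL;   /* page-align the result */
    }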
-
Committed by H. Peter Anvin
In the pre-decompression code, use the appropriate largest possible rep movs and rep stos to move code and clear bss, respectively. For the reverse copy, note that the initial values are supposed to be the address of the first (highest) copy datum, not one byte beyond the end of the buffer. rep string instructions are not necessarily the fastest way to perform these operations on all current processors, but are likely to be in the future, and perhaps more importantly, we want to encourage the architecturally right thing to do here. This also fixes a couple of trivial inefficiencies on 64 bits.
[ Impact: trivial performance enhancement, increase code similarity ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
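Expressed as GCC inline assembly (64-bit shown), the two operations look roughly like this; this is a hedged sketch with hypothetical helpers, not the code in head_64.S:

    #include <stddef.h>
    #include <stdint.h>

    /* rep stosq: store RAX (zero) to [RDI], RCX times, walking forward */
    static void zero_qwords(uint64_t *dst, size_t qwords)
    {
        asm volatile("rep stosq"
                     : "+D" (dst), "+c" (qwords)
                     : "a" (0ULL)
                     : "memory");
    }

    /* std; rep movsq copies high-to-low; note that the pointers name the
     * LAST (highest) qword to be copied, not one past the end */
    static void copy_qwords_backwards(uint64_t *dst_last,
                                      const uint64_t *src_last, size_t qwords)
    {
        asm volatile("std; rep movsq; cld"
                     : "+D" (dst_last), "+S" (src_last), "+c" (qwords)
                     :
                     : "memory", "cc");
    }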
-
Committed by H. Peter Anvin
The 64-bit code already clears EFLAGS as soon as it has a stack. This seems like a reasonable precaution, so do it on 32 bits as well.
[ Impact: extra paranoia ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
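Clearing the flags amounts to pushing a zero and popping it into the flags register, which is why it has to wait until a valid stack exists. A one-line sketch as a hypothetical inline-assembly helper:

    /* push an all-zero word and pop it into EFLAGS/RFLAGS;
     * fine in the boot environment, which is built without a red zone */
    static inline void clear_flags(void)
    {
        asm volatile("push $0; popf" ::: "memory", "cc");
    }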
-
Committed by H. Peter Anvin
Set up the decompression stack as soon as we know where it needs to go. That way we have a full-service stack as soon as possible, rather than relying on the BP_scratch field. Note that the stack does need to be empty during bss zeroing (or else the stack needs to be moved out of the bss segment, which is also an option).
[ Impact: cleanup, minor paranoia ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by H. Peter Anvin
Both on 32 and 64 bits, we copy all the way up to the end of bss, except that on 64 bits there is a hack to avoid copying on top of the page tables. There is no point in copying bss at all, especially since we are just about to zero it all anyway. To clean up and unify the handling, we now do:
- copy from startup_32 to _bss.
- zero from _bss to _ebss.
- the _ebss symbol is aligned to an 8-byte boundary.
- the page tables are moved to a separate section.
Use _bss as the copy endpoint since _edata may be misaligned.
[ Impact: cleanup, trivial performance improvement ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
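Expressed in C rather than assembly, the unified plan amounts to the following; the linker symbols are the ones named in the commit message, while the helper itself is purely illustrative:

    #include <string.h>

    /* symbols provided by the linker script (declared here for the sketch) */
    extern char startup_32[], _bss[], _ebss[];

    static void relocate_and_clear(char *dest)
    {
        /* copy only the initialized image: [startup_32, _bss) */
        memcpy(dest, startup_32, _bss - startup_32);

        /* zero the whole .bss of the copy: [_bss, _ebss); _ebss is 8-byte aligned */
        memset(dest + (_bss - startup_32), 0, _ebss - _bss);
    }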
-
Committed by H. Peter Anvin
Clean up style issues in arch/x86/boot/compressed/head_64.S. This file had a lot fewer style issues than its 32-bit cousin, but the ones it has are worth fixing, especially since doing so makes the two files more similar.
[ Impact: cleanup, no object code change ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by H. Peter Anvin
Reformat arch/x86/boot/compressed/head_32.S to be closer to the currently preferred kernel assembly style, that is:
- opcode and operand separated by a tab
- operands separated by ", "
- C-style comments
This also makes it more similar to head_64.S.
[ Impact: cleanup, no object code change ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by H. Peter Anvin
Use the BP_scratch symbol from asm-offsets.h instead of hard-coding the location.
[ Impact: cleanup ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by H. Peter Anvin
When generating the compression suffix in arch/x86/boot/compressed/Makefile, follow standard Kbuild conventions, that is:
- use a dash, not an underscore, before y/m/n endings
- use := whenever possible
Requested-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by H. Peter Anvin
Simplify arch/x86/boot/compressed/Makefile by using the new capability of specifying multiple inputs to a compressor, and the CONFIG_X86_NEED_RELOCS Kconfig symbol.
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
-
Committed by H. Peter Anvin
We only need to build relocations when we are building a 32-bit relocatable kernel. Rather than unnecessarily complicating the Makefiles, make an explicit Kbuild symbol for this.
[ Impact: permits future cleanup ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Sam Ravnborg <sam@ravnborg.org>
-
Committed by H. Peter Anvin
Allow the compression commands in Kbuild (i.e. gzip, bzip2, lzma) to take multiple input files and emit the concatenated compressed output. This avoids an intermediate step when a kernel image is built from multiple components, such as the relocatable x86-32 kernel. Sam Ravnborg integrated the bin_size script into the Makefile.
[ Impact: new build feature, not yet used ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
-
Committed by H. Peter Anvin
Aligning the .bss section makes it trivial to use large operation sizes for moving the initialized sections and clearing the .bss. The alignment chosen (L1 cache) is somewhat arbitrary, but should be large enough to avoid all known performance traps and small enough not to cause trouble.
[ Impact: trivial performance enhancement, future patch prep ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 30 Apr 2009, 1 commit
-
-
Committed by Sam Ravnborg
Jesper reported that he saw the following build issue:
> ld:arch/x86/boot/compressed/vmlinux.lds:9: syntax error
> make[2]: *** [arch/x86/boot/compressed/vmlinux] Error 1
> make[1]: *** [arch/x86/boot/compressed/vmlinux] Error 2
> make: *** [bzImage] Error 2
CPP defines the symbol "i386" to "1". Undefine this to fix it.
[ Impact: build fix with certain tool chains ]
Reported-by: Jesper Dangaard Brouer <jdb@comx.dk>
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <alpine.LFD.2.00.0904260958190.3101@localhost.localdomain>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 29 Apr 2009, 13 commits
-
-
Committed by Ingo Molnar
The __init_begin/_end symbols should be inside sections as well; otherwise the relocatable kernel gets confused when freeing init sections in the wrong place.
[ Impact: fix bootup crash ]
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Tim Abbott <tabbott@MIT.EDU>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <20090429105056.GA28720@uranus.ravnborg.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Ingo Molnar
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Cc: Tim Abbott <tabbott@MIT.EDU>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <1240991249-27117-2-git-send-email-sam@ravnborg.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Sam Ravnborg
32 bit:
- explicitly page-align .bss
- move ALIGN() out of the .brk output section
- discard *(.eh_frame)
64 bit:
- move ALIGN() out of the .bss output section
- move ALIGN() out of the .brk output section
- use a dedicated section to define _end
[ Impact: unify and fix section alignments in linker script ]
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Cc: Tim Abbott <tabbott@MIT.EDU>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <1240991249-27117-13-git-send-email-sam@ravnborg.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Sam Ravnborg
32 bit:
- move __init_end outside the .bss output section; it really did not belong in there.
[ Impact: 64-bit: cleanup, 32-bit: refactor linker script ]
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Cc: Tim Abbott <tabbott@MIT.EDU>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <1240991249-27117-12-git-send-email-sam@ravnborg.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Sam Ravnborg
[ Impact: cleanup ]
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Cc: Tim Abbott <tabbott@MIT.EDU>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <1240991249-27117-11-git-send-email-sam@ravnborg.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Sam Ravnborg
32 bit:
- increase alignment from 4 to 8 for .parainstructions
- increase alignment from 4 to 8 for .altinstructions
64 bit:
- move ALIGN() outside the output section for .altinstructions
None of the above should result in any functional change.
[ Impact: refactor and unify linker script ]
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Cc: Tim Abbott <tabbott@MIT.EDU>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <1240991249-27117-10-git-send-email-sam@ravnborg.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Sam Ravnborg
32-bit:
- Move the definition of __init_begin outside the output section, because it covers more than one section.
- Move the end-of-section ALIGN() inside the .smp_locks output section. Same effect, but it better documents the intent that both the start and the end need to be aligned.
64-bit:
- Move ALIGN() outside the output section in .init.setup.
- Delete the unused __smp_alt_* symbols.
None of the above should result in any functional change.
[ Impact: refactor and unify linker script ]
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Cc: Tim Abbott <tabbott@MIT.EDU>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <1240991249-27117-9-git-send-email-sam@ravnborg.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Sam Ravnborg
[ Impact: cleanup ]
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Cc: Tim Abbott <tabbott@MIT.EDU>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <1240991249-27117-8-git-send-email-sam@ravnborg.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Sam Ravnborg
For 64 bit, the following functional changes are introduced:
- .data.page_aligned has moved
- .data.cacheline_aligned has moved
- .data.read_mostly has moved
- ALIGN() moved out of the output section for .data.cacheline_aligned
- ALIGN() moved out of the output section for .data.page_aligned
Notice that 32 bit and 64 bit have different locations for _edata. .data_nosave is 32-bit only, as 64 bit is special due to PERCPU.
[ Impact: 32-bit: cleanup, 64-bit: use 32-bit linker script ]
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Cc: Tim Abbott <tabbott@MIT.EDU>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <1240991249-27117-7-git-send-email-sam@ravnborg.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Sam Ravnborg
[ Impact: cleanup ]
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Cc: Tim Abbott <tabbott@MIT.EDU>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <1240991249-27117-6-git-send-email-sam@ravnborg.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Sam Ravnborg
32-bit x86 had a dedicated .text.head output section, whereas 64-bit had it all in a single output section. In the unified version, the dedicated .text.head output section was kept to retain full control over the head code.
32 bit:
- Moved the definition of _stext to the linker script. The definition is located _after_ .text.page_aligned, as this is what 32 bit did before. The ALIGN(8) was introduced so we hit the exact same address (on the tested config) before and after the move. I assume it is a bug that _stext did not cover the .text.page_aligned section - if this is true, it can be fixed in a follow-up patch (and the ugly ALIGN() can be dropped).
[ Impact: 64-bit: cleanup, 32-bit: use the 64-bit linker script ]
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Cc: Tim Abbott <tabbott@MIT.EDU>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <1240991249-27117-5-git-send-email-sam@ravnborg.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Sam Ravnborg
[ Impact: cleanup ]
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Cc: Tim Abbott <tabbott@MIT.EDU>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <1240991249-27117-4-git-send-email-sam@ravnborg.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Sam Ravnborg
The PHDRS are not equal for the two, so use ifdefs to cover for that. On the assumption that they may become equal, the ifdef is inside the PHDRS definition.
[ Impact: cleanup ]
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Cc: Tim Abbott <tabbott@MIT.EDU>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <1240991249-27117-3-git-send-email-sam@ravnborg.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-