- 05 March 2014, 3 commits
Committed by Matt Fleming
Some EFI firmware makes use of the FPU during boot-time services and clearing X86_CR4_OSFXSR by overwriting %cr4 causes the firmware to crash. Add the PAE bit explicitly instead of trashing the existing contents, leaving the rest of the bits as the firmware set them.
Cc: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
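The fix amounts to a read-modify-write of %cr4 rather than a blind store. A minimal C sketch of the same idea using inline assembly (the actual change is in the 32-bit boot assembly; X86_CR4_PAE is the architectural bit 5):

    #define X86_CR4_PAE (1UL << 5)   /* architectural CR4.PAE bit */

    /* Set PAE while preserving whatever the firmware left in %cr4.
     * Privileged; shown only to illustrate the read-modify-write. */
    static inline void enable_pae_preserving_cr4(void)
    {
        unsigned long cr4;

        asm volatile("mov %%cr4, %0" : "=r"(cr4));  /* read current value */
        cr4 |= X86_CR4_PAE;                         /* flip only the PAE bit */
        asm volatile("mov %0, %%cr4" : : "r"(cr4)); /* write it back */
    }

Overwriting %cr4 with a constant would implicitly clear X86_CR4_OSFXSR (bit 9), which is what crashed FPU-using firmware.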
-
Committed by Matt Fleming
The EFI handover code only works if the "bitness" of the firmware and the kernel match, i.e. 64-bit firmware and 64-bit kernel - it is not possible to mix the two. This goes against the tradition that a 32-bit kernel can be loaded on a 64-bit BIOS platform without having to do anything special in the boot loader. Linux distributions, for one thing, regularly run only 32-bit kernels on their live media. Despite having only one 'handover_offset' field in the kernel header, EFI boot loaders use two separate entry points to enter the kernel based on the architecture the boot loader was compiled for:
(1) 32-bit loader: handover_offset
(2) 64-bit loader: handover_offset + 512
Since we already have two entry points, we can leverage them to infer the bitness of the firmware we're running on, without requiring any boot loader modifications, by making (1) and (2) valid entry points for both CONFIG_X86_32 and CONFIG_X86_64 kernels. To be clear, a 32-bit boot loader will always use (1) and a 64-bit boot loader will always use (2). It's just that, if a single kernel image supports (1) and (2), that image can be used with both 32-bit and 64-bit boot loaders, and hence both 32-bit and 64-bit EFI. (1) and (2) must be 512 bytes apart at all times, but that is already part of the boot ABI and we could never change that delta without breaking existing boot loaders anyhow.
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
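A boot loader picks its entry point from the 'handover_offset' header field according to its own bitness; a hedged sketch of that calculation (the struct here is an illustrative view of one field, not the real setup-header layout):

    #include <stdint.h>

    /* Illustrative view of the one header field involved. */
    struct setup_header_view {
        uint32_t handover_offset;
    };

    /* The 32- and 64-bit EFI handover entry points are 512 bytes apart,
     * and that delta is fixed boot ABI. */
    static uint64_t handover_entry(uint64_t kernel_base,
                                   const struct setup_header_view *hdr,
                                   int loader_is_64bit)
    {
        uint64_t entry = kernel_base + hdr->handover_offset;
        return loader_is_64bit ? entry + 512 : entry;
    }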
-
Committed by Matt Fleming
It's not possible to dereference the EFI System Table directly when booting a 64-bit kernel on 32-bit EFI firmware because the pointer sizes don't match. In preparation for supporting the above use case, build a list of function pointers on boot so that callers don't have to worry about converting pointer sizes through multiple levels of indirection.
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
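With mismatched pointer sizes, the stub snapshots what it needs into fixed-width slots up front. A rough sketch of that idea; the table layouts below are invented for illustration and are not the real EFI or kernel structures:

    #include <stdint.h>

    /* Hypothetical 32-bit firmware table views: 4-byte pointers that a
     * 64-bit kernel cannot dereference as native pointers. */
    struct systab32_view  { uint32_t boottime; };
    struct bootsvc32_view { uint32_t allocate_pages, get_memory_map; };

    /* Fixed-width snapshot built once at boot; later callers use these
     * 64-bit slots and never worry about firmware bitness again. */
    struct efi_call_table {
        uint64_t allocate_pages;
        uint64_t get_memory_map;
    };

    static void build_call_table(const struct systab32_view *systab,
                                 struct efi_call_table *out)
    {
        const struct bootsvc32_view *bs =
            (const struct bootsvc32_view *)(uintptr_t)systab->boottime;

        out->allocate_pages = bs->allocate_pages; /* zero-extend to 64-bit */
        out->get_memory_map = bs->get_memory_map;
    }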
-
- 13 October 2013, 1 commit
Committed by Kees Cook
This allows decompress_kernel to return a new location for the kernel to be relocated to. Additionally, it enforces CONFIG_PHYSICAL_START as the minimum relocation position when building with CONFIG_RELOCATABLE. With CONFIG_RANDOMIZE_BASE set, the choose_kernel_location routine will select a new location to decompress the kernel, though here it is presently a no-op. The kernel command line option "nokaslr" is introduced to bypass these routines.
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: http://lkml.kernel.org/r/1381450698-28710-3-git-send-email-keescook@chromium.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
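The "nokaslr" switch boils down to a substring test against the command line before any randomization happens. A minimal sketch under the assumption of a NUL-terminated command line (the kernel uses its own string helpers and boot_params rather than libc):

    #include <stdint.h>
    #include <string.h>

    /* Decide where to decompress. "nokaslr" bypasses randomization; at
     * this point in the series the routine is otherwise still a no-op. */
    static uint64_t choose_kernel_location(const char *cmdline,
                                           uint64_t compile_time_addr)
    {
        if (cmdline && strstr(cmdline, "nokaslr"))
            return compile_time_addr;   /* randomization bypassed */

        return compile_time_addr;       /* placeholder for a random slot */
    }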
-
- 08 August 2013, 1 commit
Committed by Kees Cook
This moves the relocation handling into C, after decompression. It requires that the decompressed size be passed to the decompression routine as well, so that the relocations can be found. Only kernels that need relocation support will use the code (currently just x86_32), but this lays the groundwork for 64-bit to use it in support of KASLR.
Based on work by Neill Clift and Michael Davidson.
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: http://lkml.kernel.org/r/20130708161517.GA4832@www.outflux.net
Acked-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
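In C, relocation handling is a walk over a table of fixup sites, adding the load-time delta to each. A simplified sketch of that general shape only; the actual on-disk relocation format appended to the kernel image differs:

    #include <stddef.h>
    #include <stdint.h>

    /* Patch 32-bit absolute references after the image has moved.
     * 'relocs' holds image-relative offsets of the fixup sites. */
    static void handle_relocations(uint8_t *image, const uint32_t *relocs,
                                   size_t count, uint32_t link_addr,
                                   uint32_t load_addr)
    {
        uint32_t delta = load_addr - link_addr;

        for (size_t i = 0; i < count; i++) {
            uint32_t *site = (uint32_t *)(image + relocs[i]);
            *site += delta;   /* rebase the absolute address in place */
        }
    }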
-
- 28 May 2013, 1 commit
Committed by Zhang Yanfei
arch/x86/boot/compressed/head_64.S includes <asm/pgtable_types.h> and <asm/page_types.h>, but it doesn't look like it needs them, so remove them.
Signed-off-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Link: http://lkml.kernel.org/r/5191FAE2.4020403@cn.fujitsu.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 02 March 2013, 1 commit
Committed by Lans Zhang
In startup_32, the running code still uses the initial GDT located in setup, so __BOOT_DS is the segment to use. Currently __KERNEL_DS happens to equal __BOOT_DS, but relying on that is not safe.
Signed-off-by: Lans Zhang <lans.zhang2008@gmail.com>
Link: http://lkml.kernel.org/r/51300267.6000008@gmail.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- 30 January 2013, 3 commits
Committed by Yinghai Lu
The 64-bit entry point is now fixed at offset 0x200 and cannot be changed anymore. Update the comments to reflect that, and document it in boot.txt.
-v2: fix some grammar errors
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-27-git-send-email-yinghai@kernel.org
Cc: Rob Landley <rob@landley.net>
Cc: Matt Fleming <matt.fleming@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Committed by Yinghai Lu
Commit 08da5a2c ("x86_64: Early segment setup for VT") sets up the LDT and TR into a valid state in order to speed up boot decompression under VT. That code lives in the 64-bit code section but uses a GDT that is only loaded on the 32-bit code path. This breaks booting with a 64-bit boot loader that skips the 32-bit path, jumps to startup_64 directly, and has a different GDT. Move those lines into the 32-bit code, after its GDT is loaded.
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-21-git-send-email-yinghai@kernel.org
Cc: Zachary Amsden <zamsden@gmail.com>
Cc: Matt Fleming <matt.fleming@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Committed by Yinghai Lu
We need to move some code into the 32-bit section in the following patch:
x86, boot: Move lldt/ltr out of 64bit code section
but that would push startup_64 down from 0x200. According to hpa, the position of startup_64 cannot change; it is ABI. We can instead move the functions verify_cpu and no_longmode down: verify_cpu is reached via a function call, and no_longmode does not return, so no extra code is needed to jump back.
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-20-git-send-email-yinghai@kernel.org
Cc: Matt Fleming <matt.fleming@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- 28 January 2013, 1 commit
Committed by David Woodhouse
We have historically hard-coded entry points in head.S just so it's easy to build the executable/bzImage headers with references to them. Unfortunately, this leads to boot loaders abusing these "known" addresses even when they are *explicitly* told that they "should look at the ELF header to find this address, as it may change in the future". And even when the address in question *has* actually been changed in the past, without fanfare or thought to compatibility. Thus we have bootloaders doing stunningly broken things like jumping to offset 0x200 in the kernel startup code in 64-bit mode, *hoping* that startup_64 is still there (it has moved at least once before). And hoping that it's actually a 64-bit kernel despite the fact that we don't give them any indication of that fact. This patch should hopefully remove the temptation to abuse internal addresses in future, where sternly worded comments have not sufficed. Instead of having hard-coded addresses and saying "please don't abuse these", we actually pull the addresses out of the ELF payload into zoffset.h, and make build.c shove them back into the right places in the bzImage header. Rather than including zoffset.h into build.c and thus having to rebuild the tool for every kernel build, we parse it instead. The parsing code is small and simple. This patch doesn't actually move any of the interesting entry points, so any offending bootloader will still continue to "work" after this patch is applied. For some version of "work" which includes jumping into the compressed payload and crashing, if the bzImage it's given is a 32-bit kernel. No change there then.
[ hpa: some of the issues in the description are addressed or retconned by the 2.12 boot protocol. This patch has been edited to only remove fixed addresses that were *not* thus retconned. ]
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Link: http://lkml.kernel.org/r/1358513837.2397.247.camel@shinybook.infradead.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: Matt Fleming <matt.fleming@intel.com>
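The "small and simple" parser in build.c scans zoffset.h for its #define lines. A hedged sketch of that kind of parsing, assuming lines of the form "#define ZO_<name> 0x<hex>" (the exact symbol names and error handling are illustrative):

    #include <stdio.h>
    #include <string.h>

    /* Scan zoffset.h for "#define ZO_<name> 0x<hex>"; return the value
     * or -1 if the symbol is absent. */
    static long lookup_zoffset(const char *path, const char *name)
    {
        char line[256], sym[64];
        unsigned long val;
        long result = -1;
        FILE *f = fopen(path, "r");

        if (!f)
            return -1;
        while (fgets(line, sizeof(line), f)) {
            if (sscanf(line, "#define ZO_%63s 0x%lx", sym, &val) == 2 &&
                !strcmp(sym, name)) {
                result = (long)val;
                break;
            }
        }
        fclose(f);
        return result;
    }

Parsing instead of #including keeps the tool binary independent of the kernel configuration, so build.c need not be rebuilt for every kernel build.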
-
- 21 July 2012, 1 commit
Committed by Matt Fleming
As things currently stand, traditional EFI boot loaders and the EFI boot stub are carrying essentially the same initialisation code required to set up an EFI machine for booting a kernel. There's really no need to have this code in two places, and the hope is that, with this new protocol, initialisation and booting of the kernel can be left solely to the kernel's EFI boot stub. The responsibilities of the boot loader then become:
o Loading the kernel image from boot media. File system code still needs to be carried by boot loaders for the scenario where the kernel and initrd files reside on a file system that the EFI firmware doesn't natively understand, such as ext4, etc.
o Providing a user interface. Boot loaders still need to display any menus/interfaces, for example to allow the user to select from a list of kernels.
Bump the boot protocol number because we added the 'handover_offset' field to indicate the location of the handover protocol entry point.
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Peter Jones <pjones@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
Acked-and-Tested-by: Matthew Garrett <mjg@redhat.com>
Link: http://lkml.kernel.org/r/1342689828-16815-1-git-send-email-matt@console-pimps.org
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
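From the loader's side, the handover protocol reduces to a single call into the stub with the firmware handles and a prepared boot_params. A hedged sketch of the call shape (types abbreviated; consult the kernel's EFI handover documentation for the authoritative prototype):

    #include <stdint.h>

    struct boot_params;   /* the x86 zeropage, prepared by the loader */

    typedef void (*handover_fn)(void *efi_image_handle,
                                void *efi_system_table,
                                struct boot_params *bp);

    /* Jump into the kernel's EFI boot stub; this call does not return. */
    static void efi_handover(uint64_t kernel_base, uint32_t handover_offset,
                             void *handle, void *systab,
                             struct boot_params *bp)
    {
        handover_fn entry =
            (handover_fn)(uintptr_t)(kernel_base + handover_offset);
        entry(handle, systab, bp);
    }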
-
- 17 April 2012, 1 commit
Committed by Matt Fleming
The method used to work out whether we were booted by EFI firmware or via a boot loader is broken. Because efi_main() is always executed when booting from a boot loader, we will dereference invalid pointers either on the stack (CONFIG_X86_32) or contained in %rdx (CONFIG_X86_64) when searching for an EFI System Table signature. Instead of dereferencing these invalid system table pointers, add a new entry point that is only used when booting from EFI firmware, when we know the pointer arguments will be valid. With this change legacy boot loaders will no longer execute efi_main(), but will instead skip EFI stub initialisation completely.
[ hpa: Marking this for urgent/stable since it is a regression when the option is enabled; without the option the patch has no effect ]
Signed-off-by: Matt Fleming <matt.hfleming@intel.com>
Link: http://lkml.kernel.org/r/1334584744.26997.14.camel@mfleming-mobl1.ger.corp.intel.com
Reported-by: Jordan Justen <jordan.l.justen@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: <stable@vger.kernel.org> v3.3
-
- 13 December 2011, 1 commit
Committed by Matt Fleming
There is currently a large divide between kernel development and the development of EFI boot loaders. The idea behind this patch is to give the kernel developers full control over the EFI boot process. As H. Peter Anvin put it, "The 'kernel carries its own stub' approach has been very successful in dealing with BIOS, and would make a lot of sense to me for EFI as well."
This patch introduces an EFI boot stub that allows an x86 bzImage to be loaded and executed by EFI firmware. The bzImage appears to the firmware as an EFI application. Luckily there are enough free bits within the bzImage header so that it can masquerade as an EFI application, thereby coercing the EFI firmware into loading it and jumping to its entry point. The beauty of this masquerading approach is that both BIOS and EFI boot loaders can still load and run the same bzImage, thereby allowing a single kernel image to work in any boot environment.
The EFI boot stub supports multiple initrds, but they must exist on the same partition as the bzImage. Command-line arguments for the kernel can be appended after the bzImage name when run from the EFI shell, e.g.
Shell> bzImage console=ttyS0 root=/dev/sdb initrd=initrd.img
v7:
- Fix checkpatch warnings.
v6:
- Try to allocate initrd memory just below hdr->initrd_addr_max.
v5:
- load_options_size is UTF-16, which needs dividing by 2 to convert to the corresponding ASCII size.
v4:
- Don't read more than image->load_options_size
v3:
- Fix the following warnings when compiling with CONFIG_EFI_STUB=n
  arch/x86/boot/tools/build.c: In function 'main':
  arch/x86/boot/tools/build.c:138:24: warning: unused variable 'pe_header'
  arch/x86/boot/tools/build.c:138:15: warning: unused variable 'file_sz'
- As reported by Matthew Garrett, some Apple machines have GOPs that don't have hardware attached. We need to weed these out by searching for ones that handle the PCIIO protocol.
- Don't allocate memory if no initrds are on cmdline
- Don't trust image->load_options_size
Maarten Lankhorst noted:
- Don't strip first argument when booted from efibootmgr
- Don't allocate too much memory for cmdline
- Don't update cmdline_size, the kernel considers it read-only
- Don't accept '\n' for initrd names
v2:
- File alignment was too large: was 8192, should be 512. Reported by Maarten Lankhorst on LKML.
- Added UGA support for graphics
- Use VIDEO_TYPE_EFI instead of hard-coded number.
- Move linelength assignment until after we've assigned depth
- Dynamically fill out AddressOfEntryPoint in tools/build.c
- Don't use magic number for GDT/TSS stuff. Requested by Andi Kleen
- The bzImage may need to be relocated as it may have been loaded at a high address by the firmware. This was required to get my MacBook booting because the firmware loaded it at 0x7cxxxxxx, which triggers this error in decompress_kernel():
  if (heap > ((-__PAGE_OFFSET-(128<<20)-1) & 0x7fffffff))
          error("Destination address too large");
Cc: Mike Waychison <mikew@google.com>
Cc: Matthew Garrett <mjg@redhat.com>
Tested-by: Henrik Rydberg <rydberg@euromail.se>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
Link: http://lkml.kernel.org/r/1321383097.2657.9.camel@mfleming-mobl1.ger.corp.intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
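The masquerading works because the bzImage header doubles as a PE/COFF header: offset 0 carries the DOS "MZ" magic, and the 32-bit value at offset 0x3c points at the "PE\0\0" signature. A small sketch of how firmware-style code validates that layout (standard PE format, nothing kernel-specific):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Return 1 if the buffer looks like a PE image the firmware would
     * load: "MZ" at offset 0 and "PE\0\0" at the offset stored at 0x3c. */
    static int looks_like_pe(const uint8_t *img, size_t len)
    {
        uint32_t pe_off;

        if (len < 0x40 || img[0] != 'M' || img[1] != 'Z')
            return 0;
        memcpy(&pe_off, img + 0x3c, sizeof(pe_off));  /* e_lfanew field */
        if (pe_off > len - 4)
            return 0;
        return !memcmp(img + pe_off, "PE\0\0", 4);
    }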
-
- 11 November 2010, 1 commit
Committed by Kees Cook
The code is 32-bit already, and can be used in 32-bit routines.
Signed-off-by: Kees Cook <kees.cook@canonical.com>
LKML-Reference: <1289414154-7829-2-git-send-email-kees.cook@canonical.com>
Acked-by: Pekka Enberg <penberg@kernel.org>
Acked-by: Alan Cox <alan@lxorguk.ukuu.org.uk>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- 03 August 2010, 1 commit
Committed by H. Peter Anvin
In order for global variables and functions to work in the decompressor, we need to fix up the GOT in assembly code.
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
LKML-Reference: <4C57382E.8050501@zytor.com>
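The fixup is a loop over the GOT, adding the difference between link-time and load-time addresses to each entry. The kernel does it in assembly; the logic reads like this C sketch (_got/_egot follow the decompressor's linker-script naming, the rest is illustrative):

    #include <stdint.h>

    extern uintptr_t _got[];   /* GOT boundaries from the linker script */
    extern uintptr_t _egot[];

    /* Rebase every GOT entry by the load-vs-link delta so that global
     * variable and function references resolve correctly. */
    static void adjust_got(uintptr_t load_addr, uintptr_t link_addr)
    {
        uintptr_t delta = load_addr - link_addr;

        for (uintptr_t *entry = _got; entry < _egot; entry++)
            *entry += delta;
    }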
-
- 24 October 2009, 1 commit
Committed by Alexander Potashev
A single 'movl' is shorter than the 'xorl'-'orl' pair. No change in behaviour.
Signed-off-by: Alexander Potashev <aspotashev@gmail.com>
LKML-Reference: <1256341043-4928-1-git-send-email-aspotashev@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 19 September 2009, 1 commit
Committed by Tim Abbott
This has the consequence of changing the section name used for head code from ".text.head" to ".head.text". Linus suggested that we merge the ".text.head" section with ".text" (presumably while preserving the fact that the head code starts at 0). When I tried this it caused the kernel to not boot.
Signed-off-by: Tim Abbott <tabbott@ksplice.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 12 May 2009, 3 commits
Committed by H. Peter Anvin
Make the kernel_alignment field adjustable; this allows us to set it to a large value (intended to be 16 MB to avoid ZONE_DMA contention, memory holes and other weirdness) while a smart bootloader can still force loading at a lesser alignment if absolutely necessary. Also export pref_address (the preferred loading address, corresponding to the link-time address) and init_size, the total amount of linear memory the kernel will require during initialization.
[ Impact: allows better kernel placement, gives bootloader more info ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
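A loader that honors these fields picks pref_address when it fits, and otherwise rounds up to the advertised alignment while making sure init_size bytes are available. A hedged sketch of that placement decision (field names follow the boot protocol; memory-map probing is elided):

    #include <stdint.h>

    /* Prefer pref_address; otherwise round up to kernel_alignment and
     * verify init_size bytes fit. Returns 0 if the region is unusable. */
    static uint64_t place_kernel(uint64_t region_base, uint64_t region_size,
                                 uint64_t pref_address,
                                 uint32_t kernel_alignment,
                                 uint32_t init_size)
    {
        uint64_t end = region_base + region_size;

        if (pref_address >= region_base && pref_address + init_size <= end)
            return pref_address;

        uint64_t addr = (region_base + kernel_alignment - 1) &
                        ~((uint64_t)kernel_alignment - 1);
        return (addr + init_size <= end) ? addr : 0;
    }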
-
Committed by H. Peter Anvin
Remove a couple of lines of dead code from arch/x86/boot/compressed/head_*.S; all of these update registers that are dead in the current code.
[ Impact: cleanup ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by H. Peter Anvin
Use LOAD_PHYSICAL_ADDR instead of CONFIG_PHYSICAL_START in the 64-bit decompression code, for equivalence with the 32-bit code.
[ Impact: cleanup, increases code similarity ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
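LOAD_PHYSICAL_ADDR is CONFIG_PHYSICAL_START rounded up to the configured alignment, which is why it is the right constant on both bitnesses. Its shape, as defined in the kernel's asm/boot.h:

    /* CONFIG_PHYSICAL_START aligned up to CONFIG_PHYSICAL_ALIGN. */
    #define LOAD_PHYSICAL_ADDR ((CONFIG_PHYSICAL_START \
                + (CONFIG_PHYSICAL_ALIGN - 1)) \
                & ~(CONFIG_PHYSICAL_ALIGN - 1))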
-
- 09 May 2009, 6 commits
Committed by H. Peter Anvin
Determine the compressed code offset (from the kernel runtime address) at compile time. This allows some minor optimizations in arch/x86/boot/compressed/head_*.S, but more importantly it makes this value available to the build process, which will enable a future patch to export the necessary linear memory footprint into the bzImage header.
[ Impact: cleanup, future patch enabling ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by H. Peter Anvin
In the pre-decompression code, use the appropriate largest possible rep movs and rep stos to move code and clear bss, respectively. For the reverse copy, note that the initial values are supposed to be the address of the first (highest) copy datum, not one byte beyond the end of the buffer. rep strings are not necessarily the fastest way to perform these operations on all current processors, but are likely to be in the future, and perhaps more importantly, we want to encourage the architecturally right thing to do here. This also fixes a couple of trivial inefficiencies on 64 bits.
[ Impact: trivial performance enhancement, increase code similarity ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
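"Largest possible" means quadword-sized rep movs/stos on 64 bits. A sketch of both idioms as C inline assembly (the kernel does this directly in head_64.S; note the decrementing copy takes pointers to the highest element):

    #include <stddef.h>
    #include <stdint.h>

    /* Decrementing copy: %rsi/%rdi must point at the LAST quadword of
     * each buffer, not one past the end. */
    static void copy_quads_backward(uint64_t *dst_last,
                                    const uint64_t *src_last, size_t nquads)
    {
        asm volatile("std; rep movsq; cld"
                     : "+S"(src_last), "+D"(dst_last), "+c"(nquads)
                     :
                     : "memory");
    }

    /* Quadword clear; the region length must be a multiple of 8 bytes. */
    static void zero_quads(uint64_t *dst, size_t nquads)
    {
        asm volatile("rep stosq"
                     : "+D"(dst), "+c"(nquads)
                     : "a"(0UL)
                     : "memory");
    }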
-
Committed by H. Peter Anvin
Set up the decompression stack as soon as we know where it needs to go. That way we have a full-service stack as soon as possible, rather than relying on the BP_scratch field. Note that the stack does need to be empty during bss zeroing (or else the stack needs to be moved out of the bss segment, which is also an option).
[ Impact: cleanup, minor paranoia ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by H. Peter Anvin
Both on 32 and 64 bits, we copy all the way up to the end of bss, except that on 64 bits there is a hack to avoid copying on top of the page tables. There is no point in copying bss at all, especially since we are just about to zero it all anyway. To clean up and unify the handling, we now do:
- copy from startup_32 to _bss.
- zero from _bss to _ebss.
- the _ebss symbol is aligned to an 8-byte boundary.
- the page tables are moved to a separate section.
Use _bss as the copy endpoint since _edata may be misaligned.
[ Impact: cleanup, trivial performance improvement ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
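In C terms the new handling is one copy up to _bss followed by one clear up to _ebss, driven entirely by linker-script symbols. A sketch under that assumption (the real code is assembly in head_*.S, and startup_32 here stands in for the image start label):

    #include <string.h>

    extern char startup_32[];     /* start of the loaded image */
    extern char _bss[], _ebss[];  /* linker-provided boundaries */

    /* Copy code and data only (up to _bss), then zero bss in place at
     * the destination; _edata may be misaligned, _bss is not. */
    static void copy_and_clear(char *dest)
    {
        size_t image_bytes = (size_t)(_bss - startup_32);

        memmove(dest, startup_32, image_bytes);
        memset(dest + image_bytes, 0, (size_t)(_ebss - _bss));
    }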
-
Committed by H. Peter Anvin
Clean up style issues in arch/x86/boot/compressed/head_64.S. This file had a lot fewer style issues than its 32-bit cousin, but the ones it has are worth fixing, especially since it makes the two files more similar.
[ Impact: cleanup, no object code change ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by H. Peter Anvin
Use the BP_scratch symbol from asm-offsets.h instead of hard-coding the location.
[ Impact: cleanup ]
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 20 February 2009, 1 commit
Committed by Cyrill Gorcunov
Impact: cleanup
The linker script puts startup_32 at a predefined address, so using ENTRY will not bloat the code size.
Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 14 February 2009, 1 commit
Committed by Jeremy Fitzhardinge
In general, the only definitions that assembly files can use are in _types.h headers (where available), so convert them.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
-
- 25 May 2008, 1 commit
Committed by Cyrill Gorcunov
We should use the already defined flags from processor-flags.h instead of defining our own.
[>>> object code check >>>]
original md5sum:
129f24be6df396fb7d8bf998c01fc716  arch/x86/boot/compressed/head_64.o
   text    data     bss     dec     hex filename
    705      48   45056   45809    b2f1 arch/x86/boot/compressed/head_64.o
patched md5sum:
129f24be6df396fb7d8bf998c01fc716  arch/x86/boot/compressed/head_64-new.o
   text    data     bss     dec     hex filename
    705      48   45056   45809    b2f1 arch/x86/boot/compressed/head_64-new.o
[<<< object code check <<<]
Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 20 April 2008, 2 commits
Committed by Yinghai Lu
Cleanup: change _end in the compressed vmlinux_64.lds, and change the unneeded _heap to _ebss.
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Alexander van Heukelum
The kernel decompressor wrapper uses memory located beyond the end of the image. This might lead to hard-to-debug problems, but even if it can be proven to be safe, it is at the very least unclean. I don't see any advantages either, unless you count it not being zeroed out as an advantage. This patch moves the boot-heap area to the bss segment.
Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 04 February 2008, 1 commit
Committed by Andi Kleen
Fix up all users.
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 28 October 2007, 1 commit
Committed by Eric W. Biederman
The kernel only ever supports one version of the boot protocol, so there is no need to check the boot protocol revision to see if a feature is supported. Both x86 and x86_64 support the same boot protocol, so we need to implement KEEP_SEGMENTS on x86_64 as well. It isn't just paravirt bootloaders that could use this functionality.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Vivek Goyal <vgoyal@in.ibm.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Zachary Amsden <zach@vmware.com>
Cc: Andi Kleen <ak@suse.de>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
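KEEP_SEGMENTS is a bit in the setup header's loadflags that tells early boot code not to reload the segment registers. A sketch of the check, assuming the bit value documented in the x86 boot protocol (the surrounding struct is abbreviated away):

    #include <stdint.h>

    #define KEEP_SEGMENTS (1 << 6)  /* loadflags bit per the boot protocol */

    /* Loaders such as paravirt environments set this to tell early boot
     * code to leave %ds/%es/%ss exactly as loaded. */
    static int keep_segments(uint8_t loadflags)
    {
        return (loadflags & KEEP_SEGMENTS) != 0;
    }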
-
- 11 October 2007, 4 commits
Committed by Thomas Gleixner
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Thomas Gleixner
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Thomas Gleixner
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Thomas Gleixner
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 12 August 2007, 1 commit
Committed by Zachary Amsden
VT is very picky about when it can enter execution. Get all segments set up and get the LDT and TR into a valid state to allow VT execution under VMware and KVM (untested). This makes the boot decompression run under VT, which makes it several orders of magnitude faster on 64-bit Intel hardware. Before, I was seeing times up to a minute or more to decompress a 1.3MB kernel on a very fast box.
Signed-off-by: Zachary Amsden <zach@vmware.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 13 July 2007, 1 commit
Committed by H. Peter Anvin
The relocatable kernel code needs a scratch field for the decompressor to determine its own location. It was using a location inside struct screen_info; reserve a free location and document it as scratch instead.
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-