- 07 Dec, 2006: 40 commits
-
-
Submitted by Rusty Russell

It turns out that the most-called ops, by several orders of magnitude, are the interrupt manipulation ops. These are obvious candidates for patching, so mark them up and create infrastructure for it.

The method used is that the ops structure has a patch function, which is called for each place which needs to be patched: this returns a number of instructions (the rest are NOP-padded). Usually we can spare a register (%eax) for the binary patched code to use, but in a couple of critical places in entry.S we can't: we make the clobbers explicit at the call site, and manually clobber the allowed registers in debug mode as an extra check.

And: Don't abuse CONFIG_DEBUG_KERNEL, add CONFIG_DEBUG_PARAVIRT.
And: AK: Fix warnings in x86-64 alternative.c build
And: AK: Fix compilation with defconfig
And: From: Andrew Morton <akpm@osdl.org>
Some binutils versions still like to emit references to __stop_parainstructions and __start_parainstructions.
And: AK: Fix warnings about unused variables when PARAVIRT is disabled.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Zachary Amsden <zach@vmware.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
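A minimal sketch of the bookkeeping such patching needs, with illustrative names (this is not the commit's actual code; add_nops is an assumed helper): each patchable call site is recorded in a table between __start_parainstructions and __stop_parainstructions, and at boot the patcher asks the ops structure's patch hook for replacement code, NOP-padding whatever is left of the site.

    /* Sketch only: names and layout are illustrative. */
    struct paravirt_patch_site {
            unsigned char *instr;     /* address of the patchable site */
            unsigned char type;       /* which paravirt op lives here */
            unsigned char len;        /* bytes available at the site */
            unsigned short clobbers;  /* registers the site lets us use */
    };

    extern struct paravirt_patch_site __start_parainstructions[];
    extern struct paravirt_patch_site __stop_parainstructions[];

    static void apply_paravirt_patches(void)
    {
            struct paravirt_patch_site *p;

            for (p = __start_parainstructions; p < __stop_parainstructions; p++) {
                    /* the patch hook returns how many bytes it wrote... */
                    unsigned int used = paravirt_ops.patch(p->type, p->clobbers,
                                                           p->instr, p->len);
                    /* ...and the remainder of the site is NOP-padded */
                    add_nops(p->instr + used, p->len - used);
            }
    }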
-
Submitted by Rusty Russell

Create a paravirt.h header for all the critical operations which need to be replaced with hypervisor calls, and include that instead of defining native operations, when CONFIG_PARAVIRT.

This patch does the dumbest possible replacement of paravirtualized instructions: calls through a "paravirt_ops" structure. Currently these are function implementations of native hardware: hypervisors will override the ops structure with their own variants.

All the pv-ops functions are declared "fastcall" so that a specific register-based ABI is used, to make inlining assembler easier.

And: From: Andy Whitcroft <apw@shadowen.org>
The paravirt ops introduce a 'weak' attribute onto memory_setup(). Code ordering leads to the following warning on x86:
    arch/i386/kernel/setup.c:651: warning: weak declaration of `memory_setup' after first use results in unspecified behavior
Move memory_setup() to avoid this.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Zachary Amsden <zach@vmware.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
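In outline, the mechanism is one structure of function pointers that native code fills in and a hypervisor overrides; a compressed sketch with only a few of the many ops shown (the field set is abridged, not the full structure):

    #define fastcall __attribute__((regparm(3)))   /* register-based ABI */

    struct paravirt_ops {
            void (fastcall *irq_disable)(void);
            void (fastcall *irq_enable)(void);
            unsigned long (fastcall *save_fl)(void);
            void (fastcall *restore_fl)(unsigned long flags);
            /* ... cpuid, MSR, TLB and page-table ops, etc. ... */
    };

    extern struct paravirt_ops paravirt_ops;        /* native by default */

    /* callers go through the structure instead of emitting cli directly */
    static inline void raw_local_irq_disable(void)
    {
            paravirt_ops.irq_disable();
    }

A hypervisor port then overwrites paravirt_ops with its own table (e.g. paravirt_ops = xen_paravirt_ops, name hypothetical) early in boot, before any of these call sites run.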
-
Submitted by Nicolas Kaiser

Fix double #includes in arch/i386.

Signed-off-by: Nicolas Kaiser <nikai@nikai.net>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Submitted by David Rientjes

Substitute the allocate_pgdat virtual address lookup with the pfn_to_kaddr macro.

Signed-off-by: David Rientjes <rientjes@cs.washington.edu>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
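For reference, pfn_to_kaddr is just the pfn-to-lowmem-virtual translation spelled as a macro, so the substitution replaces open-coded arithmetic; an illustrative before/after (the surrounding code is assumed, not taken from the patch):

    /* pfn_to_kaddr(pfn) expands to __va((pfn) << PAGE_SHIFT) */
    pg_data_t *pgdat;

    /* before: open-coded address arithmetic */
    pgdat = (pg_data_t *)__va(start_pfn << PAGE_SHIFT);

    /* after: the intent is visible at a glance */
    pgdat = (pg_data_t *)pfn_to_kaddr(start_pfn);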
-
Submitted by Chuck Ebbert

IOPL is implicitly saved and restored on task switch, so the explicit check is no longer needed.

Signed-off-by: Chuck Ebbert <76306.1226@compuserve.com>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Submitted by Andi Kleen

An interrupt could happen between setting the IO-APIC entry and setting its interrupt data. Pointed out by Linus.

Signed-off-by: Andi Kleen <ak@suse.de>
-
Submitted by bibo,mao

This patch moves the e820 memory map printing and memmap boot parameter parsing functions from setup.c to e820.c, and adds limit_regions and print_memory_map declarations to the header file.

Signed-off-by: bibo,mao <bibo.mao@intel.com>
Signed-off-by: Andi Kleen <ak@suse.de>

 arch/i386/kernel/e820.c  | 152 ++++++++++++++++++++++++++++++++++
 arch/i386/kernel/setup.c | 153 ----------------------------------
 include/asm-i386/e820.h  |   2
 3 files changed, 155 insertions(+), 152 deletions(-)
-
Submitted by bibo,mao

This patch moves the e820/EFI memmap table walking function from setup.c to e820.c, and adds an extern declaration to the header file.

Signed-off-by: bibo,mao <bibo.mao@intel.com>
Signed-off-by: Andi Kleen <ak@suse.de>

 arch/i386/kernel/e820.c  | 115 +++++++++++++++++++++++++++
 arch/i386/kernel/setup.c | 118 ---------------------------
 include/asm-i386/e820.h  |   2
 3 files changed, 117 insertions(+), 118 deletions(-)
-
Submitted by bibo,mao

Move more code from setup.c into e820.c.

Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Submitted by bibo,mao

This patch moves the BIOS e820 map sanitize and copy functions from setup.c to e820.c.

Signed-off-by: bibo,mao <bibo.mao@intel.com>
Signed-off-by: Andi Kleen <ak@suse.de>

 arch/i386/kernel/e820.c  | 252 +++++++++++++++++++++++++++++++++++
 arch/i386/kernel/setup.c | 240 --------------------------------
 2 files changed, 252 insertions(+), 240 deletions(-)
-
Submitted by bibo,mao

This patch creates a new file named e820.c to handle standard I/O and memory resources, moving the request_standard_resources function from setup.c to e820.c. It also modifies the Makefile to compile e820.c.

Signed-off-by: bibo,mao <bibo.mao@intel.com>
Signed-off-by: Andi Kleen <ak@suse.de>

 arch/i386/kernel/Makefile |   2
 arch/i386/kernel/e820.c   | 289 ++++++++++++++++++++++++++++++++++
 arch/i386/kernel/setup.c  | 276 -------------------------------
 3 files changed, 293 insertions(+), 274 deletions(-)
-
Submitted by Andi Kleen

Makes the intention of the code clearer to read and avoids a potential deadlock on mmap_sem. Also change the types of the arguments to not include __user, because they're really not user addresses.

Signed-off-by: Andi Kleen <ak@suse.de>
-
Submitted by Andi Kleen

CLFLUSH is a lot faster than WBINVD, so try to use that.

Signed-off-by: Andi Kleen <ak@suse.de>
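A sketch of the line-by-line flush this enables, assuming the CPU advertises CLFLUSH and its line size has been recorded in boot_cpu_data.x86_clflush_size (see the /proc/cpuinfo entry below); WBINVD stays as the fallback for CPUs without CLFLUSH:

    /* Flush one page from the caches line by line instead of writing
     * back and invalidating every cache in the machine. */
    static void cache_flush_page(char *addr)
    {
            int i;

            for (i = 0; i < PAGE_SIZE; i += boot_cpu_data.x86_clflush_size)
                    asm volatile("clflush (%0)" : : "r" (addr + i) : "memory");
    }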
-
Submitted by Andi Kleen

Also report it in /proc/cpuinfo, similar to x86-64. Needed for a follow-on patch.

Signed-off-by: Andi Kleen <ak@suse.de>
-
Submitted by Joe Korty

The entry.S code at work_notifysig is surely wrong: it drops into unrelated code if the branch to work_notifysig_v86 is taken and CONFIG_VM86=n.

The problem was introduced by "[PATCH] Make vm86 support optional" (tree 9b5daef5280800a0006343a17f63072658d91a1d, pushed to git Jan 8, 2006, first appearing in 2.6.16).

The 'fix' here is to also compile out the vm86 test & branch when CONFIG_VM86=n.

Signed-off-by: Joe Korty <joe.korty@ccur.com>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Submitted by Vivek Goyal

Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Submitted by Vivek Goyal

Extend the bzImage protocol to enable boot loaders to load a completely relocatable bzImage. Now the protected mode component of the kernel is also relocatable, and a boot loader can load the protected mode component at a different physical address than 1MB (if the kernel was built with CONFIG_RELOCATABLE).

Kexec can make use of this to load the kernel at a different physical address to capture kernel crash dumps.

Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Submitted by Vivek Goyal

o CONFIG_PHYSICAL_START is being replaced with CONFIG_PHYSICAL_ALIGN. Hardcoding the kernel physical start value creates a problem in the relocatable kernel context due to boot loader limitations. For example, if somebody compiles a relocatable kernel to be run from address 4MB, this kernel will actually run from location 1MB, as grub loads the kernel at physical address 1MB and the kernel thinks "I am a relocatable kernel and I should run from the address I have been loaded at". So somebody wanting to run the kernel from a 4MB-aligned location (for improved performance reasons) can't do that.

o Hence, Eric proposed that CONFIG_PHYSICAL_ALIGN probably makes more sense in the relocatable kernel context. At run time the kernel will move itself to a physical address which meets the user-specified alignment restriction.

Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
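The run-time behaviour this implies is a simple round-up: wherever the boot loader placed the image, the kernel moves itself to the next address that satisfies the configured alignment. A sketch (the helper macro is local to this example):

    #define ALIGN_UP(x, a)  (((x) + (a) - 1) & ~((unsigned long)(a) - 1))

    static unsigned long relocated_start(unsigned long load_addr)
    {
            /* e.g. loaded at 1MB with CONFIG_PHYSICAL_ALIGN=4MB:
             * the kernel relocates itself to 4MB */
            return ALIGN_UP(load_addr, CONFIG_PHYSICAL_ALIGN);
    }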
-
Submitted by Vivek Goyal

o Relocations generated w.r.t. absolute symbols are not processed, as by definition absolute symbols are not to be relocated. Explicitly warn the user about absolute relocations present at compile time.

o These relocations get introduced either by linker optimizations or by programming oversights.

o Also create a list of symbols which have been audited to be safe, and don't emit warnings for those.

Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Submitted by Eric W. Biederman

This patch modifies the i386 kernel so that, if CONFIG_RELOCATABLE is selected, it can be loaded at any 4K-aligned address below 1G. The technique used is to compile the decompressor with -fPIC and modify it so that it is fully relocatable. For the main kernel, relocations are generated, resulting in a kernel that is relocatable with no runtime overhead and no need to modify the source code.

A reserved 32-bit word in the parameters has been assigned to serve as a stack so we can figure out where we are running.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Submitted by Eric W. Biederman

Defining __PHYSICAL_START and __KERNEL_START in asm-i386/page.h works, but it triggers a full kernel rebuild for the silliest of reasons. This modifies the users to directly use CONFIG_PHYSICAL_START and linux/config.h, which prevents the full-rebuild problem and makes the code much more maintainer- and hopefully user-friendly.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Submitted by Eric W. Biederman

Currently, when reserving the memory the kernel text resides in, we start at __PHYSICAL_START, which happens to be correct but not very obvious. In addition, once we start relocating the kernel, __PHYSICAL_START is the wrong value, as it is an absolute symbol that does not get relocated. By starting the reservation at __pa_symbol(_text) the code is clearer and will remain correct when relocated.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
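In shape, the change is the following (the bootmem call is illustrative of that era's two-argument API; the size computation is simplified):

    extern char _text[], _end[];

    static void reserve_kernel_image(void)
    {
            /* Before: reserve_bootmem(__PHYSICAL_START, ...) is anchored
             * to the link-time address, which goes wrong once the image
             * can be loaded elsewhere.  After: anchored to the symbol
             * itself, so it follows the kernel wherever it is loaded. */
            reserve_bootmem(__pa_symbol(_text),
                            __pa_symbol(_end) - __pa_symbol(_text));
    }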
-
Submitted by Vivek Goyal

Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Submitted by Vivek Goyal

Ld knows about two kinds of symbols: absolute and section-relative. Section-relative symbols change value when a section is moved; absolute symbols do not.

Currently in the linker script we have several labels marking the beginning and ending of sections that are placed outside of sections, making them absolute symbols. Having a mixture of absolute and section-relative symbols referring to the same data is currently harmless, but it is confusing.

This must be done carefully, as newer revs of ld do not place symbols that appear in sections without data, and instead ld makes those symbols global :(

My ultimate goal is to build a relocatable kernel. The safest and least intrusive technique is to generate relocation entries so the kernel can be relocated at load time. The only penalty would be an increase in the size of the kernel binary. The problem is that if absolute and relocatable symbols are not properly specified, absolute symbols will be relocated or section-relative symbols won't be, which is fatal.

The practical motivation is that when generating kernels that will run from a reserved area for analyzing what caused a kernel panic, it is simpler if you don't need to hard-code the physical memory location they will run at, especially for the distributions.

[AK: and merged:]
o Also print a message so that in future people can be aware of it and avoid introducing absolute symbols.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Submitted by Andi Kleen

On modern systems RAM errors don't cause NMIs; the usual cause is PCI SERR. Mention PCI instead of RAM in the printk.

Reported by Ryutaro Hayashi <r_hayashi@ctc-g.co.jp>

Cc: r_hayashi@ctc-g.co.jp
Signed-off-by: Andi Kleen <ak@suse.de>
-
Submitted by Alan Cox

Resending, as I believe the discussion about them established that they were correct.

Signed-off-by: Alan Cox <alan@redhat.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Submitted by Andi Kleen

Currently the idle loop has two nested loops: one high-level loop in cpu_idle and, in some low-level idle functions, another one. Looping in the low-level idle functions breaks the idle notifiers, because interrupts waking up sleep states need to execute exit_idle(), which is only in cpu_idle(). So don't do that; only loop in cpu_idle(). This only removes code.

In some cases, e.g. poll_idle, the idle loop is a little longer now because cpu_idle checks more things. I hope that isn't a problem. ACPI idle doesn't change behaviour because it never looped anyway.

Cc: len.brown@intel.com
Cc: eranian@hpl.hp.com
Signed-off-by: Andi Kleen <ak@suse.de>
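Schematically, the structure after the change looks like the following (names such as pm_idle, default_idle and the idle-notifier hooks enter_idle/__exit_idle are taken from the x86-64 idle code of the time; this is a sketch, not the patch itself):

    /* The only idle loop left; a low-level idle routine performs one
     * idle period and returns, so __exit_idle() always runs. */
    void cpu_idle(void)
    {
            for (;;) {
                    while (!need_resched()) {
                            void (*idle)(void) = pm_idle;

                            if (!idle)
                                    idle = default_idle;
                            enter_idle();
                            idle();          /* one sleep, then return */
                            __exit_idle();
                    }
                    schedule();
            }
    }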
-
Submitted by Jeremy Fitzhardinge

Use the pcurrent field in the PDA to implement the "current" macro. This ends up compiling down to a single instruction to get the current task.

Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Chuck Ebbert <76306.1226@compuserve.com>
Cc: Zachary Amsden <zach@vmware.com>
Cc: Jan Beulich <jbeulich@novell.com>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
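A sketch of what a PDA-based current compiles down to (struct i386_pda and its pcurrent field come from the pda.h entry further down this series; the inline asm here is illustrative, not the commit's exact code):

    /* "current" becomes a single %gs-relative load from the PDA */
    static inline struct task_struct *get_current(void)
    {
            struct task_struct *t;

            asm("movl %%gs:%c1, %0"
                : "=r" (t)
                : "i" (offsetof(struct i386_pda, pcurrent)));
            return t;
    }

    #define current get_current()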
-
Submitted by Jeremy Fitzhardinge

Use the cpu_number in the PDA to implement raw_smp_processor_id. This is a little simpler than using thread_info, though the cpu field in thread_info cannot be removed, since it is used for things other than getting the current CPU in common code.

Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Chuck Ebbert <76306.1226@compuserve.com>
Cc: Zachary Amsden <zach@vmware.com>
Cc: Jan Beulich <jbeulich@novell.com>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
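With the cpu number cached in the PDA, the accessor plausibly reduces to one line (read_pda as sketched under the pda.h entry below):

    /* one %gs-relative load instead of current_thread_info()->cpu */
    #define raw_smp_processor_id() (read_pda(cpu_number))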
-
Submitted by Jeremy Fitzhardinge

sys_vm86 uses a struct kernel_vm86_regs, which is identical to pt_regs but adds extra space for all the segment registers. Previously this structure was completely independent, so changes in pt_regs had to be reflected in kernel_vm86_regs. This change just embeds pt_regs in kernel_vm86_regs and makes the appropriate changes to vm86.c to deal with the new naming.

Also, since %gs is dealt with differently in the kernel, this change adjusts vm86.c to reflect that.

While making these changes, I also cleaned up some frankly bizarre code which was added when auditing was added to sys_vm86.

Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Chuck Ebbert <76306.1226@compuserve.com>
Cc: Zachary Amsden <zach@vmware.com>
Cc: Jan Beulich <jbeulich@novell.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Jason Baron <jbaron@redhat.com>
Cc: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
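The resulting shape, sketched (the padding-word names are assumed):

    /* pt_regs is embedded rather than duplicated, so future pt_regs
     * changes propagate to the vm86 frame automatically */
    struct kernel_vm86_regs {
            struct pt_regs pt;              /* the normal register frame */
            /* the extra vm86 segment state follows */
            unsigned short es, __esh;
            unsigned short ds, __dsh;
            unsigned short fs, __fsh;
            unsigned short gs, __gsh;
    };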
-
Submitted by Jeremy Fitzhardinge

There are a few places where the change in struct pt_regs and the use of %gs affect the userspace ABI. These are primarily debugging interfaces where thread state can be inspected or extracted.

Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Chuck Ebbert <76306.1226@compuserve.com>
Cc: Zachary Amsden <zach@vmware.com>
Cc: Jan Beulich <jbeulich@novell.com>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Submitted by Jeremy Fitzhardinge

This patch is the meat of the PDA change. It makes several related changes:

1: Most significantly, %gs is now used in the kernel. This means that on entry, the old value of %gs is saved away, and it is reloaded with __KERNEL_PDA.

2: entry.S constructs the stack in the shape of struct pt_regs, and this is passed around the kernel so that the process's saved register state can be accessed. Unfortunately struct pt_regs doesn't currently have space for %gs (or %fs). This patch extends pt_regs to add space for gs (no space is allocated for %fs, since it won't be used, and it would just complicate the code in entry.S to work around the space).

3: Because %gs is now saved on the stack like %ds, %es and the integer registers, there are a number of places where it no longer needs to be handled specially, namely context switch and saving/restoring the register state in a signal context.

4: And since kernel threads run in kernel space and call normal kernel code, they need to be created with their %gs == __KERNEL_PDA.

Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Chuck Ebbert <76306.1226@compuserve.com>
Cc: Zachary Amsden <zach@vmware.com>
Cc: Jan Beulich <jbeulich@novell.com>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
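A sketch of the extended frame described in point 2 (the field order is abridged and assumed; the exact layout is in the patch itself). Note there is deliberately no slot for %fs:

    struct pt_regs {
            long ebx, ecx, edx, esi, edi, ebp, eax;
            int  xds;
            int  xes;
            int  xgs;       /* new: %gs saved on every kernel entry */
            long orig_eax;
            long eip;
            int  xcs;
            long eflags, esp;
            int  xss;
    };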
-
Submitted by Jeremy Fitzhardinge

When a CPU is brought up, a PDA and GDT are allocated for it. The GDT's __KERNEL_PDA entry is pointed at the allocated PDA memory, so that all references through this segment descriptor will refer to the PDA.

This patch rearranges CPU initialization a bit, so that the GDT/PDA are set up as early as possible in cpu_init(). Also, for secondary CPUs, the GDT+PDA are preallocated and initialized, so all the secondary CPU needs to do is set up the ldt and load %gs. This will be important once smp_processor_id() and current use the PDA.

In all cases, the PDA is set up in head.S, before a CPU starts running C code, so the PDA is always available.

Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Chuck Ebbert <76306.1226@compuserve.com>
Cc: Zachary Amsden <zach@vmware.com>
Cc: Jan Beulich <jbeulich@novell.com>
Cc: Andi Kleen <ak@suse.de>
Cc: James Bottomley <James.Bottomley@SteelEye.com>
Cc: Matt Tolentino <matthew.e.tolentino@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Submitted by Jeremy Fitzhardinge

This patch has the basic definitions of struct i386_pda, and the segment selector in the GDT.

asm-i386/pda.h is more or less a direct copy of asm-x86_64/pda.h. The most interesting difference is the use of _proxy_pda, which is used to give gcc a model for the actual memory operations on the real PDA structure. No actual reference is ever made to _proxy_pda, so it is never defined.

Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Chuck Ebbert <76306.1226@compuserve.com>
Cc: Zachary Amsden <zach@vmware.com>
Cc: Jan Beulich <jbeulich@novell.com>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
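The trick, roughly as in the x86-64 header (macro names assumed from there; a sketch, not the exact code):

    /* Declared but never defined: it exists purely so gcc has a memory
     * operand that models the %gs-relative access, keeping dependency
     * tracking honest.  No reference to it survives into the binary. */
    extern struct i386_pda _proxy_pda;

    #define pda_from_op(op, field)                                      \
    ({                                                                  \
            typeof(_proxy_pda.field) ret__;                             \
            asm(op " %%gs:%c1, %0"                                      \
                : "=r" (ret__)                                          \
                : "i" (offsetof(struct i386_pda, field)),               \
                  "m" (_proxy_pda.field));                              \
            ret__;                                                      \
    })

    #define read_pda(field) pda_from_op("movl", field)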
-
Submitted by Jeremy Fitzhardinge

Use asm-offsets for the offsets of registers into the pt_regs struct, rather than hard-coded constants.

I left the constants in the comments of entry.S because they're useful for reference; the code in entry.S is very dependent on the layout of pt_regs, even when using asm-offsets.

Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Keith Owens <kaos@ocs.com.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
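The asm-offsets mechanism in miniature: a C file is compiled to assembly only, and markers in its output are post-processed into a header of constants. A self-contained sketch of the idea (the symbol names are illustrative):

    #include <linux/stddef.h>       /* offsetof */

    /* Emit "->SYMBOL value" markers into the generated assembly; the
     * build post-processes them into #defines in asm-offsets.h. */
    #define DEFINE(sym, val) \
            asm volatile("\n->" #sym " %0 " : : "i" (val))
    #define OFFSET(sym, str, mem) DEFINE(sym, offsetof(struct str, mem))

    void foo(void)
    {
            OFFSET(PT_EBX, pt_regs, ebx);
            OFFSET(PT_EAX, pt_regs, eax);
            OFFSET(PT_EIP, pt_regs, eip);
    }

entry.S can then say movl PT_EAX(%esp), %eax instead of hard-coding the 24(%esp) it documents in its comments.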
-
Submitted by Amol Lad

ioremap must be balanced by an iounmap, and failing to do so can result in a memory leak.

Tested (compilation only):
- using allmodconfig
- making sure the files compile without any warnings/errors due to the new changes

Signed-off-by: Amol Lad <amol@verismonetworks.com>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Submitted by Amol Lad

Signed-off-by: Amol Lad <amol@verismonetworks.com>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Submitted by Chuck Ebbert

i386 port of the sLeAZY-fpu feature. Chuck reports that this gives him a +/- 0.4% improvement on his simple benchmark. The x86-64 description follows:

Right now the kernel on x86-64 has 100% lazy FPU behavior: after *every* context switch, a trap is taken for the first FPU use, to restore the FPU context lazily. This is of course great for applications that have very sporadic or no FPU use (since they then avoid doing the expensive save/restore all the time). However, very frequent FPU users... take an extra trap every context switch.

The patch below adds a simple heuristic to this code: after 5 consecutive context switches of FPU use, the lazy behavior is disabled and the context gets restored on every context switch. If the app indeed uses the FPU, the trap is avoided (the chance of the 6th time slice using the FPU after the previous 5 have done so is quite high, obviously). After 256 switches this is reset, and lazy behavior resumes (until there are 5 consecutive FPU-using switches again). The reason for this is to give apps that do longer bursts of FPU use the lazy behavior back after some time.

Signed-off-by: Chuck Ebbert <76306.1226@compuserve.com>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Andi Kleen <ak@suse.de>
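The heuristic described above fits in a few lines; a sketch with the per-task counter named after the x86-64 version (fpu_counter, a u8, so the wrap at 256 gives the periodic reset for free; helper names are the kernel's, the wrapper functions are illustrative):

    /* in the context-switch path: after 5 consecutive FPU-using
     * timeslices, restore the FPU state eagerly and skip the trap */
    static void switch_fpu_sketch(struct task_struct *next_p)
    {
            if (next_p->fpu_counter > 5)
                    math_state_restore();
            /* otherwise TS stays set; the first FPU use traps */
    }

    /* in the device-not-available trap handler: count another
     * consecutive FPU-using timeslice */
    static void dna_trap_sketch(struct task_struct *tsk)
    {
            tsk->fpu_counter++;
            math_state_restore();
    }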
-
Submitted by Stas Sergeev

Clean up the espfix code:

- Introduced a PER_CPU() macro to be used from asm
- Introduced a GET_DESC_BASE() macro to be used from asm
- Rewrote the fixup code in asm, as calling C code with an altered %ss appeared to be unsafe
- No longer altering the stack from a .fixup section
- The 16-bit per-cpu stack is no longer used; instead, the stack segment base is patched so that the high word of the kernel and user %esp are the same
- Added the limit-patching for the espfix segment (Chuck Ebbert)

[jeremy@goop.org: use the x86 scaling addressing mode rather than shifting]

Signed-off-by: Stas Sergeev <stsp@aknet.ru>
Signed-off-by: Andi Kleen <ak@suse.de>
Acked-by: Zachary Amsden <zach@vmware.com>
Acked-by: Chuck Ebbert <76306.1226@compuserve.com>
Acked-by: Jan Beulich <jbeulich@novell.com>
Cc: Andi Kleen <ak@muc.de>
Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Submitted by Andrew Morton

When a spinlock lockup occurs, arrange for the NMI code to emit an all-CPU backtrace, so we get to see which CPU is holding the lock, and where.

Cc: Andi Kleen <ak@muc.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Badari Pulavarty <pbadari@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Andi Kleen <ak@suse.de>
-