- 07 December 2006, 40 commits
-
Committed by Andi Kleen
CLFLUSH is a lot faster than WBINVD, so avoid the latter if at all possible. Always pass the complete list of pages to the other CPUs to cut down the number of IPIs. Minor other cleanup and sync with the i386 version. Signed-off-by: Andi Kleen <ak@suse.de>
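To illustrate the technique, here is a minimal sketch of flushing one page line by line with CLFLUSH instead of dropping the entire cache with WBINVD. The helper name, the fixed 64-byte line size, and the 4K page size are assumptions for the sketch, not the kernel's actual change_page_attr()/global_flush_tlb() code.

```c
#include <stddef.h>

/* Illustrative only: flush a page's cache lines individually rather than
 * writing back and invalidating the whole cache.  Assumes a 64-byte cache
 * line; real code derives the line size from CPUID.
 */
#define CACHE_LINE_SIZE	64
#define PAGE_SIZE_4K	4096

static void flush_page(void *page)
{
	char *p = page;
	size_t i;

	for (i = 0; i < PAGE_SIZE_4K; i += CACHE_LINE_SIZE)
		asm volatile("clflush %0" : "+m" (p[i]));
}
```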
-
Committed by Joe Korty
The entry.S code at work_notifysig is surely wrong. It drops into unrelated code if the branch to work_notifysig_v86 is taken and CONFIG_VM86=n. ([PATCH] Make vm86 support optional, tree 9b5daef5280800a0006343a17f63072658d91a1d, pushed to git Jan 8, 2006, first appears in 2.6.16.) The fix here is to also compile out the vm86 test and branch when CONFIG_VM86=n. Signed-off-by: Joe Korty <joe.korty@ccur.com> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Avi Kivity
Code that wants to use struct desc_struct cannot do so on i386, because desc.h contains other code that will only compile on x86_64. So extract the structure definitions into asm-x86_64/desc_defs.h. Signed-off-by: Avi Kivity <avi@qumranet.com> Signed-off-by: Andi Kleen <ak@suse.de>
 include/asm-x86_64/desc.h      | 53 ------------------------------
 include/asm-x86_64/desc_defs.h | 69 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 70 insertions(+), 52 deletions(-)
-
Committed by Vivek Goyal
Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Vivek Goyal
Extend the bzImage protocol to enable boot loaders to load a completely relocatable bzImage. The protected-mode component of the kernel is now also relocatable, and a boot loader can load it at a physical address other than 1MB (if the kernel was built with CONFIG_RELOCATABLE). Kexec can make use of this to load the kernel at a different physical address to capture kernel crash dumps. Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Vivek Goyal
o CONFIG_PHYSICAL_START is being replaced with CONFIG_PHYSICAL_ALIGN. Hardcoding the kernel's physical start address creates a problem in the relocatable-kernel context because of boot loader limitations. For example, if somebody compiles a relocatable kernel to run from address 4MB, grub will still load it at physical address 1MB, and the kernel, being relocatable, will run from the address it was loaded at. So somebody wanting to run the kernel from a 4MB-aligned location (for improved performance in some regions) cannot do that.
o Hence Eric proposed that CONFIG_PHYSICAL_ALIGN makes more sense in the relocatable-kernel context: at run time the kernel moves itself to a physical address that meets the user-specified alignment restriction, as sketched below.
Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de>
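A minimal sketch of the alignment decision described above: round the address the kernel was loaded at up to the configured alignment. The helper name is illustrative; only the constant name mirrors CONFIG_PHYSICAL_ALIGN.

```c
/* Illustrative: relocate to the next address that satisfies the
 * configured alignment.  CONFIG_PHYSICAL_ALIGN must be a power of two.
 */
#define CONFIG_PHYSICAL_ALIGN	0x400000UL	/* e.g. 4MB */

static unsigned long align_kernel_addr(unsigned long loaded_at)
{
	return (loaded_at + CONFIG_PHYSICAL_ALIGN - 1) &
	       ~(CONFIG_PHYSICAL_ALIGN - 1);
}
```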
-
Committed by Vivek Goyal
o Relocations generated against absolute symbols are not processed, since by definition absolute symbols are not to be relocated. Explicitly warn the user at compile time about absolute relocations that are present.
o These relocations get introduced either by linker optimizations or by programming oversights.
o Also keep a list of symbols that have been audited to be safe, and don't emit warnings for those (see the sketch after this entry).
Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de>
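A sketch of the whitelist idea from the last point: known-safe absolute symbols are skipped, and anything else that is absolute triggers a warning. The symbol names below are examples, not the tool's actual audited list.

```c
#include <string.h>

/* Illustrative: skip warnings for absolute symbols that have been
 * audited as safe; warn about everything else.
 */
static const char * const safe_abs_symbols[] = {
	"__kernel_vsyscall",	/* example entries only */
	"__kernel_sigreturn",
	NULL,
};

static int is_safe_abs_reloc(const char *symname)
{
	int i;

	for (i = 0; safe_abs_symbols[i]; i++)
		if (strcmp(symname, safe_abs_symbols[i]) == 0)
			return 1;
	return 0;
}
```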
-
Committed by Eric W. Biederman
This patch modifies the i386 kernel so that, if CONFIG_RELOCATABLE is selected, it can be loaded at any 4K-aligned address below 1G. The technique used is to compile the decompressor with -fPIC and modify it so the decompressor is fully relocatable. For the main kernel, relocations are generated, resulting in a kernel that is relocatable with no runtime overhead and no need to modify the source code. A reserved 32-bit word in the parameters has been assigned to serve as a stack so we can figure out where we are running. Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Eric W. Biederman
Print the addresses of non-absolute symbols relative to _text so that ld will generate relocations, allowing a relocatable kernel to relocate them. We can't actually use the symbol names, because kallsyms includes static symbols that are not exported from their object files. Add the _text symbol definition to the architectures that don't otherwise define it, or the linker will fail. Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Eric W. Biederman
Defining __PHYSICAL_START and __KERNEL_START in asm-i386/page.h works, but it triggers a full kernel rebuild for the silliest of reasons. This modifies the users to use CONFIG_PHYSICAL_START directly (via linux/config.h), which prevents the full-rebuild problem and makes the code more maintainer-friendly and hopefully more user-friendly. Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Eric W. Biederman
Currently, when reserving the memory the kernel text resides in, we start at __PHYSICAL_START, which happens to be correct but is not very obvious. In addition, once we start relocating the kernel, __PHYSICAL_START is the wrong value, as it is an absolute symbol that does not get relocated. By starting the reservation at __pa_symbol(_text) the code is clearer and will remain correct when relocated. Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Eric W. Biederman
On x86_64 we have to be careful when calculating the physical address of kernel symbols, both because of compiler oddities and because the symbols live in a different range of the virtual address space. Having a definition of __pa_symbol that works on both x86_64 and i386 simplifies writing code with these kinds of dependencies that runs on both. So this patch adds the trivial i386 __pa_symbol definition. Assembly magic similar to RELOC_HIDE was added as suggested by Andi Kleen; it was just picked up from x86_64. Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de>
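For reference, a sketch of what such a definition looks like: an empty asm hides the symbol's address from the compiler (the same trick RELOC_HIDE plays) so it cannot be folded into an absolute constant, and the result is then converted with the ordinary __pa(). Treat it as illustrative rather than the exact patch contents.

```c
/* Sketch of an i386 __pa_symbol: launder the address through an asm so
 * gcc cannot constant-fold it, then translate it with __pa().
 */
#define __pa_symbol(x)						\
({								\
	unsigned long __addr;					\
	asm("" : "=r" (__addr) : "0" ((unsigned long)(x)));	\
	__pa(__addr);						\
})
```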
-
Committed by Vivek Goyal
Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Vivek Goyal
Ld knows about two kinds of symbols, absolute and section-relative. Section-relative symbols change value when a section is moved; absolute symbols do not. Currently in the linker script we have several labels marking the beginning and end of sections that are placed outside of any section, making them absolute symbols. Having a mixture of absolute and section-relative symbols referring to the same data is currently harmless, but it is confusing. This must be done carefully, as newer revs of ld do not place symbols that appear in sections without data and instead make those symbols global :(

My ultimate goal is to build a relocatable kernel. The safest and least intrusive technique is to generate relocation entries so the kernel can be relocated at load time. The only penalty would be an increase in the size of the kernel binary. The problem is that if absolute and relocatable symbols are not properly specified, either absolute symbols will be relocated or section-relative symbols won't be, which is fatal. The practical motivation is that when generating kernels that will run from a reserved area for analyzing what caused a kernel panic, it is simpler if you don't need to hard-code the physical memory location they will run at, especially for the distributions.

[AK: and merged:]
o Also emit a message so that in future people are aware of the issue and avoid introducing absolute symbols.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen
This network ioctl wasn't handled before. Reported by Alexandra.Kossovsky@oktetlabs.ru (Alexandra Kossovsky). Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen
On modern systems RAM errors don't cause NMIs; they are usually caused by PCI SERR. Mention PCI instead of RAM in the printk. Reported by r_hayashi@ctc-g.co.jp (Ryutaro Hayashi). Cc: r_hayashi@ctc-g.co.jp Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Alan Cox
Resending, as I believe the discussion about them established that they were correct. Signed-off-by: Alan Cox <alan@redhat.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Andi Kleen
Currently the idle loop has two nested loops: one high-level loop in cpu_idle() and, in some low-level idle functions, another one. Looping in the low-level idle functions breaks the idle notifiers, because interrupts that wake up sleep states need to execute exit_idle(), which is only called from cpu_idle(). So don't do that; only loop in cpu_idle(). This only removes code. In some cases, e.g. poll_idle, the idle loop is a little longer now because cpu_idle() checks more things; I hope that isn't a problem. ACPI idle doesn't change behaviour because it never looped anyway. Cc: len.brown@intel.com Cc: eranian@hpl.hp.com Signed-off-by: Andi Kleen <ak@suse.de>
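A minimal sketch of the resulting structure, assuming the x86-64 enter_idle()/exit_idle() notifier hooks named in the message: the only loop lives in cpu_idle(), and each low-level idle routine halts once and returns.

```c
/* Sketch: all looping happens here; the low-level idle() (for example
 * default_idle(): sti; hlt; return) no longer loops itself, so interrupts
 * that wake the CPU can run exit_idle() through the idle notifier.
 */
void cpu_idle(void)
{
	for (;;) {
		while (!need_resched()) {
			enter_idle();
			idle();		/* single halt, then return */
		}
		schedule();
	}
}
```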
-
Committed by Andi Kleen
This patch fixes the math emulator, which had not been adjusted to match the changed struct pt_regs. AK: extracted from a larger patch by Jeremy. Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Jeremy Fitzhardinge
Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Jeremy Fitzhardinge
Use the pcurrent field in the PDA to implement the "current" macro. This ends up compiling down to a single instruction to get the current task. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Chuck Ebbert <76306.1226@compuserve.com> Cc: Zachary Amsden <zach@vmware.com> Cc: Jan Beulich <jbeulich@novell.com> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
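A sketch of what this looks like, assuming the read_pda() accessor introduced by the i386 PDA patches further down in this series; not necessarily the exact header contents.

```c
/* Sketch of asm-i386/current.h after the change: "current" becomes a
 * single %gs-relative load of the PDA's pcurrent field.
 */
static __always_inline struct task_struct *get_current(void)
{
	return read_pda(pcurrent);
}

#define current get_current()
```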
-
Committed by Jeremy Fitzhardinge
Use the cpu_number field in the PDA to implement raw_smp_processor_id. This is a little simpler than using thread_info, though the cpu field in thread_info cannot be removed, since it is used for things other than getting the current CPU in common code. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Chuck Ebbert <76306.1226@compuserve.com> Cc: Zachary Amsden <zach@vmware.com> Cc: Jan Beulich <jbeulich@novell.com> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
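Correspondingly, a one-line sketch (again assuming the read_pda() accessor from the PDA patches below):

```c
/* Sketch: raw_smp_processor_id() becomes a %gs-relative read of the
 * PDA's cpu_number field instead of current_thread_info()->cpu.
 */
#define raw_smp_processor_id() (read_pda(cpu_number))
```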
-
Committed by Jeremy Fitzhardinge
sys_vm86 uses a struct kernel_vm86_regs, which is identical to pt_regs but adds extra space for all the segment registers. Previously this structure was completely independent, so changes in pt_regs had to be reflected in kernel_vm86_regs. This change just embeds pt_regs in kernel_vm86_regs and makes the appropriate changes to vm86.c to deal with the new naming. Also, since %gs is dealt with differently in the kernel, this change adjusts vm86.c to reflect that. While making these changes, I also cleaned up some frankly bizarre code which was added when auditing was added to sys_vm86. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Chuck Ebbert <76306.1226@compuserve.com> Cc: Zachary Amsden <zach@vmware.com> Cc: Jan Beulich <jbeulich@novell.com> Cc: Andi Kleen <ak@suse.de> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Jason Baron <jbaron@redhat.com> Cc: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Andrew Morton <akpm@osdl.org>
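A sketch of the embedding described above; the extra segment fields are shown to illustrate the shape and are not an exact copy of the header.

```c
/* Sketch: rather than duplicating every pt_regs field, embed pt_regs and
 * append only the vm86-specific segment slots.
 */
struct kernel_vm86_regs {
	struct pt_regs pt;		/* normal registers, entry.S layout */
	/* segments pushed/popped only when entering/leaving vm86 mode */
	unsigned short es, __esh;
	unsigned short ds, __dsh;
	unsigned short fs, __fsh;
	unsigned short gs, __gsh;
};
```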
-
Committed by Jeremy Fitzhardinge
There are a few places where the change in struct pt_regs and the use of %gs affect the userspace ABI. These are primarily debugging interfaces where thread state can be inspected or extracted. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Chuck Ebbert <76306.1226@compuserve.com> Cc: Zachary Amsden <zach@vmware.com> Cc: Jan Beulich <jbeulich@novell.com> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Jeremy Fitzhardinge
This patch is the meat of the PDA change. It makes several related changes:
1. Most significantly, %gs is now used in the kernel. This means that on entry the old value of %gs is saved away and %gs is reloaded with __KERNEL_PDA.
2. entry.S constructs the stack in the shape of struct pt_regs, which is passed around the kernel so that the process's saved register state can be accessed. Unfortunately struct pt_regs doesn't currently have space for %gs (or %fs). This patch extends pt_regs to add space for gs (no space is allocated for %fs, since it won't be used, and it would just complicate the code in entry.S to work around the space).
3. Because %gs is now saved on the stack like %ds, %es and the integer registers, there are a number of places where it no longer needs to be handled specially, namely context switch and saving/restoring the register state in a signal context.
4. And since kernel threads run in kernel space and call normal kernel code, they need to be created with their %gs == __KERNEL_PDA.
A sketch of the extended pt_regs follows below.
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Chuck Ebbert <76306.1226@compuserve.com> Cc: Zachary Amsden <zach@vmware.com> Cc: Jan Beulich <jbeulich@novell.com> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
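A sketch of the i386 pt_regs with the extra gs slot, following point 2 above. The field name xgs mirrors the convention of the other segment fields (xds, xes) but should be taken as an assumption here, not a quotation of the patch.

```c
/* Sketch of include/asm-i386/ptrace.h after the change: a slot for the
 * saved %gs sits alongside the other saved segment registers.
 */
struct pt_regs {
	long ebx, ecx, edx, esi, edi, ebp, eax;
	int  xds;
	int  xes;
	int  xgs;	/* new: %gs pushed/popped in entry.S */
	long orig_eax;
	long eip;
	int  xcs;
	long eflags;
	long esp;
	int  xss;
};
```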
-
Committed by Jeremy Fitzhardinge
When a CPU is brought up, a PDA and GDT are allocated for it. The GDT's __KERNEL_PDA entry is pointed at the allocated PDA memory, so that all references through this segment descriptor will refer to the PDA. This patch rearranges CPU initialization a bit so that the GDT/PDA are set up as early as possible in cpu_init(). Also, for secondary CPUs the GDT+PDA are preallocated and initialized, so all the secondary CPU needs to do is set up the ldt and load %gs. This will be important once smp_processor_id() and current use the PDA. In all cases, the PDA is set up in head.S before a CPU starts running C code, so the PDA is always available. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Chuck Ebbert <76306.1226@compuserve.com> Cc: Zachary Amsden <zach@vmware.com> Cc: Jan Beulich <jbeulich@novell.com> Cc: Andi Kleen <ak@suse.de> Cc: James Bottomley <James.Bottomley@SteelEye.com> Cc: Matt Tolentino <matthew.e.tolentino@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Jeremy Fitzhardinge
This patch has the basic definitions of struct i386_pda and the segment selector in the GDT. asm-i386/pda.h is more or less a direct copy of asm-x86_64/pda.h. The most interesting difference is the use of _proxy_pda, which is used to give gcc a model for the actual memory operations on the real PDA structure. No actual reference is ever made to _proxy_pda, so it is never defined. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Chuck Ebbert <76306.1226@compuserve.com> Cc: Zachary Amsden <zach@vmware.com> Cc: Jan Beulich <jbeulich@novell.com> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
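A trimmed sketch of the _proxy_pda idea: gcc sees an ordinary memory operand on _proxy_pda (so it models the access and its dependencies), while the emitted instruction actually goes through %gs to the real per-CPU PDA. The real header also has write/add variants and size dispatch; this shows only the read side, with a reduced field set.

```c
#include <linux/stddef.h>

/* Sketch of the read side of asm-i386/pda.h.  _proxy_pda is declared but
 * never defined; it only gives gcc an addressable object to stand in for
 * the %gs-relative access.
 */
struct i386_pda {
	struct i386_pda *_pda;		/* pointer back to itself */
	int cpu_number;
	struct task_struct *pcurrent;	/* current task */
};

extern struct i386_pda _proxy_pda;

#define pda_offset(field)	offsetof(struct i386_pda, field)

#define read_pda(field)							\
({									\
	typeof(_proxy_pda.field) ret__;					\
	asm("mov %%gs:%c1, %0"						\
	    : "=r" (ret__)						\
	    : "i" (pda_offset(field)), "m" (_proxy_pda.field));		\
	ret__;								\
})
```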
-
Committed by Jeremy Fitzhardinge
Use asm-offsets for the offsets of registers into the pt_regs struct, rather than having hard-coded constants. I left the constants in the comments of entry.S because they're useful for reference; the code in entry.S is very dependent on the layout of pt_regs, even when using asm-offsets. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Keith Owens <kaos@ocs.com.au> Signed-off-by: Andrew Morton <akpm@osdl.org>
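For context, a sketch of how such asm-offsets entries are generated. The OFFSET/DEFINE helpers are the standard kbuild pattern; the exact constant names used here (PT_EAX and so on) are an assumption.

```c
/* Sketch of arch/i386/kernel/asm-offsets.c entries: kbuild compiles this
 * file, scrapes the "->" markers out of the generated assembly, and turns
 * them into #defines that entry.S can include.
 */
#include <linux/stddef.h>
#include <asm/ptrace.h>

#define DEFINE(sym, val) \
	asm volatile("\n->" #sym " %0 " #val : : "i" (val))
#define OFFSET(sym, str, mem) DEFINE(sym, offsetof(struct str, mem))

void foo(void)
{
	OFFSET(PT_EAX, pt_regs, eax);
	OFFSET(PT_EBX, pt_regs, ebx);
	OFFSET(PT_GS,  pt_regs, xgs);
	OFFSET(PT_EIP, pt_regs, eip);
}
```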
-
Committed by Jan Beulich
This patch:
- makes ret_from_sys_call no longer global (all external users were previously switched to use int_ret_from_sys_call)
- adjusts the placement of a CFI_{REMEMBER,RESTORE}_STATE pair to better fit the logic flow
- eliminates an unnecessary pair of CFI_{REMEMBER,RESTORE}_STATE
- glues together, function- and unwinder-wise, the previously separate system_call and int_ret_from_sys_call fragments
Signed-off-by: Jan Beulich <jbeulich@novell.com> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andrew Morton
Fix "BUG: using smp_processor_id() in preemptible [00000001] code" in the backtracer on preemptible debug kernels. Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Stephane Eranian
- add Intel Precise Event-Based Sampling (PEBS) related MSRs
- add Intel Data Save (DS) Area related MSRs
- add Intel Core microarchitecture performance counter MSRs
Signed-off-by: stephane eranian <eranian@hpl.hp.com> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Stephane Eranian
Add to the x86-64 tree a bunch of MSRs related to performance monitoring for processors based on the Intel Core microarchitecture. It also adds some architectural MSRs for PEBS. A similar patch for i386 follows.
Changelog (representative definitions are sketched below):
- add Intel Precise Event-Based Sampling (PEBS) related MSRs
- add Intel Data Save (DS) Area related MSRs
- add Intel Core microarchitecture performance counter MSRs
Signed-off-by: stephane eranian <eranian@hpl.hp.com> Signed-off-by: Andi Kleen <ak@suse.de>
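To make the kind of addition concrete, two representative definitions follow. The two MSR numbers shown (DS save area and PEBS enable) are the well-known architectural values; the exact set and naming in the patch itself should be treated as an assumption.

```c
/* Representative asm/msr.h-style definitions of the kind added here. */
#define MSR_IA32_DS_AREA	0x600	/* Data Save area pointer */
#define MSR_IA32_PEBS_ENABLE	0x3f1	/* Precise Event-Based Sampling enable */
```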
-
Committed by Amol Lad
ioremap must be balanced by an iounmap, and failing to do so can result in a memory leak. Tested (compilation only):
- using allmodconfig
- making sure the files compile without any warning/error due to the new changes
Signed-off-by: Amol Lad <amol@verismonetworks.com> Signed-off-by: Andi Kleen <ak@suse.de>
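A generic illustration of the pattern being enforced: every successful ioremap() gets a matching iounmap(), including on error paths. The function and parameter names here are placeholders, not from the patched drivers.

```c
/* Illustration of balanced ioremap()/iounmap(). */
static int example_hw_init(void __iomem *regs)	/* hypothetical helper */
{
	(void)regs;
	return 0;
}

static int example_probe(unsigned long phys_base, unsigned long len)
{
	void __iomem *regs;

	regs = ioremap(phys_base, len);
	if (!regs)
		return -ENOMEM;

	if (example_hw_init(regs) < 0) {
		iounmap(regs);		/* don't leak the mapping on error */
		return -EIO;
	}

	/* ... later, on teardown ... */
	iounmap(regs);
	return 0;
}
```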
-
Committed by Amol Lad
Signed-off-by: Amol Lad <amol@verismonetworks.com> Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Aaron Durbin
Insert the local APIC and IO APIC(s) into the resource tree. This makes the APIC resources visible in /proc/iomem. The patch also takes into account IO APIC(s) mapped in PCI space by deferring the insertion until after PCI has allocated its necessary resources. Signed-off-by: Aaron Durbin <adurbin@google.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@muc.de> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: Andrew Morton <akpm@osdl.org>
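A sketch of claiming the local APIC's MMIO page in the iomem tree so it shows up in /proc/iomem; variable and function names are illustrative.

```c
/* Sketch: register the local APIC's register page as a busy iomem
 * resource.  apic_phys is the APIC base from the APIC base MSR / MP
 * tables (illustrative name).
 */
static struct resource lapic_resource = {
	.name  = "Local APIC",
	.flags = IORESOURCE_MEM | IORESOURCE_BUSY,
};

static void insert_lapic_resource(unsigned long apic_phys)
{
	lapic_resource.start = apic_phys;
	lapic_resource.end   = apic_phys + PAGE_SIZE - 1;
	insert_resource(&iomem_resource, &lapic_resource);
}
```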
-
Committed by Chuck Ebbert
i386 port of the sLeAZY-fpu feature. Chuck reports that this gives him a +/- 0.4% improvement on his simple benchmark. The x86-64 description follows:

Right now the kernel on x86-64 has 100% lazy FPU behavior: after *every* context switch a trap is taken on the first FPU use to restore the FPU context lazily. This is of course great for applications that have very sporadic or no FPU use (since you then avoid doing the expensive save/restore all the time). However, for very frequent FPU users... you take an extra trap every context switch.

The patch below adds a simple heuristic to this code: after 5 consecutive context switches of FPU use, the lazy behavior is disabled and the context gets restored on every context switch. If the app indeed uses the FPU, the trap is avoided (the chance of the 6th time slice using the FPU after the previous 5 have done so is obviously quite high). After 256 switches this is reset and lazy behavior returns (until there are 5 consecutive ones again). The reason for this is to give apps that do longer bursts of FPU use the lazy behavior back after some time.

Signed-off-by: Chuck Ebbert <76306.1226@compuserve.com> Signed-off-by: Arjan van de Ven <arjan@linux.intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Andi Kleen <ak@suse.de>
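A compressed sketch of the heuristic described above. The field and helper names follow the shape of the x86-64 version but are assumptions here, not the patch's exact code.

```c
/* Sketch of the sLeAZY-fpu heuristic.  fpu_counter counts consecutive
 * time slices in which the task used the FPU; being a narrow (8-bit)
 * field, it wraps after 256 switches, which restores lazy behavior for
 * long FPU bursts.
 */
struct task_fpu_state {
	unsigned char fpu_counter;	/* consecutive FPU-using switches */
};

/* Called from the device-not-available trap when the task touches the FPU. */
static void on_fpu_trap(struct task_fpu_state *t)
{
	t->fpu_counter++;		/* extend the streak */
}

/* Called at context-switch time for the incoming task. */
static int should_restore_fpu_eagerly(const struct task_fpu_state *t)
{
	return t->fpu_counter > 5;	/* past 5 in a row, skip the lazy trap */
}
```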
-
Committed by Stas Sergeev
Clean up the espfix code:
- Introduced a PER_CPU() macro to be used from asm
- Introduced a GET_DESC_BASE() macro to be used from asm
- Rewrote the fixup code in asm, as calling C code with the altered %ss appeared to be unsafe
- No longer altering the stack from a .fixup section
- The 16-bit per-cpu stack is no longer used; instead the stack segment base is patched so that the high word of the kernel and user %esp are the same
- Added the limit-patching for the espfix segment (Chuck Ebbert)
[jeremy@goop.org: use the x86 scaling addressing mode rather than shifting]
Signed-off-by: Stas Sergeev <stsp@aknet.ru> Signed-off-by: Andi Kleen <ak@suse.de> Acked-by: Zachary Amsden <zach@vmware.com> Acked-by: Chuck Ebbert <76306.1226@compuserve.com> Acked-by: Jan Beulich <jbeulich@novell.com> Cc: Andi Kleen <ak@muc.de> Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org> Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Andrew Morton
When a spinlock lockup occurs, arrange for the NMI code to emit an all-CPU backtrace, so we get to see which CPU is holding the lock, and where. Cc: Andi Kleen <ak@muc.de> Cc: Ingo Molnar <mingo@elte.hu> Cc: Badari Pulavarty <pbadari@us.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Andi Kleen <ak@suse.de>
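A rough sketch of the call site in the spinlock-debug lockup path. Both the wrapper function and the trigger_all_cpu_backtrace() helper name are assumptions about the interface, not a quotation of the patch.

```c
/* Sketch (lib/spinlock_debug.c style): after spinning on a lock for far
 * too long, warn once, dump our own stack, and ask the NMI code to make
 * every CPU dump its backtrace so the lock holder becomes visible.
 */
static void spin_lockup_detected(spinlock_t *lock)
{
	printk(KERN_EMERG "BUG: spinlock lockup on CPU#%d, %s/%d, %p\n",
	       raw_smp_processor_id(), current->comm, current->pid, lock);
	dump_stack();
	trigger_all_cpu_backtrace();	/* assumed helper name */
}
```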
-
Committed by Jeremy Fitzhardinge
This patch removes the default_ldt[] array, as it has been unused since iBCS stopped being supported. This means it is now possible to actually set an empty LDT segment. In order to deal with this, the set_ldt_desc/load_LDT pair has been replaced with a single set_ldt() operation, which is responsible both for setting up the LDT descriptor in the GDT and for reloading the LDT register. If there are no LDT entries, the LDT register is loaded with a NULL descriptor. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Acked-by: Zachary Amsden <zach@vmware.com> Signed-off-by: Andrew Morton <akpm@osdl.org>
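A sketch of the combined operation; the GDT descriptor setup is abstracted behind a stand-in helper, since the real code fills in an LDT descriptor entry before reloading the LDT register.

```c
/* Sketch of a combined set_ldt(): install (or skip) the LDT descriptor in
 * this CPU's GDT, then reload the LDT register in one place.
 */
void write_gdt_ldt_entry(const void *addr, unsigned int entries);	/* stand-in */

static inline void set_ldt(const void *addr, unsigned int entries)
{
	if (entries == 0) {
		/* empty LDT: load a NULL selector into the LDT register */
		asm volatile("lldt %w0" : : "q" (0));
	} else {
		write_gdt_ldt_entry(addr, entries);
		asm volatile("lldt %w0" : : "q" (GDT_ENTRY_LDT * 8));
	}
}
```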
-
Committed by Alexey Dobriyan
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
-