- 26 September 2006, 40 commits
-
Committed by Andi Kleen
They cannot actually be freed because the FACS table has a shared-with-the-BIOS lock.
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Ian Campbell
This patch updates the x86_64 linker script to pack any .note.* sections into a PT_NOTE segment in the output file.
To do this, we tell ld that we need a PT_NOTE segment. This requires us to start explicitly mapping sections to segments, so we also need to explicitly create PT_LOAD segments for text and data, and map the sections to them appropriately. Fortunately, each section defaults to its previous section's segment, so it doesn't take many changes to vmlinux.lds.S.
The corresponding change has already been made for i386 in -mm and I'd like this patch to join it. The section-to-segment mappings do change, as do the segment flags, so some time in -mm would be good for that reason as well, just in case. In particular, .data and .bss move from the text segment to the data segment, and .data.cacheline_aligned and .data.read_mostly are put in the data segment instead of a separate one.
I think it would be possible to exactly match the existing section-to-segment mapping and flags, but it would be a more intrusive change and I'm not sure there is a reason for the existing layout other than it is what you get by default if you don't explicitly specify something else. If there is a reason for the existing layout then I will of course make the more intrusive change. If there is no reason, we could probably drop the executable or writable flags from some segments, but I don't know how much attention is paid to them anyway, so it might not be worth the effort.
The vsyscall-related sections need to go in a different segment from the normal data segment, so I invented a "user" segment to contain them. I believe this should appear to be another data segment as far as the kernel is concerned, so the flags are set up accordingly.
The notes will be used in the Xen paravirt_ops backend to provide additional information to the domain builder. I am in the process of converting the xen-unstable kernels and tools over to this scheme to support this in the future.
It has been suggested to me that the notes segment should have flags 0 (i.e. not readable) since it is only used by the loader and not at runtime. For now I went with a readable segment since that is what the i386 patch uses.
AK: dropped the NOTES addition for now because the needed infrastructure is not merged yet
Signed-off-by: Ian Campbell <ian.campbell@xensource.com>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Eric W. Biederman
In long mode %cs is largely a relic. However, there are a few cases, like iret, where it matters that we have a valid value. Without this patch it is possible to enter the kernel in startup_64 without setting %cs to a valid value. With this patch we don't care what %cs value we enter the kernel with, so long as the cs shadow register indicates it is a privileged code segment.
Thanks to Magnus Damm for finding this problem and posting the first workable patch. I have moved the jump that sets %cs down a few instructions so we don't need to take an extra jump, which keeps the code simpler.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen
Based on a patch from David Rientjes <rientjes@google.com>, but changed by AK. Optimizes the 64-bit Hamming weight for x86_64 processors, assuming they have fast multiplication. Uses five fewer bitops than the generic hweight64. A benchmark on one EMT64 showed ~25% speedup with 2^24 consecutive calls.
Define a new ARCH_HAS_FAST_MULTIPLIER that can be set by other architectures that can also multiply fast.
Signed-off-by: Andi Kleen <ak@suse.de>
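For reference, here is a minimal standalone sketch of the multiply-based population count this commit refers to. It illustrates the technique only; the kernel's own hweight64() differs in types and placement, so treat the names below as assumptions for this example.

```c
#include <stdint.h>
#include <stdio.h>

/* Population count via the classic SWAR reduction plus one multiply.
 * The final multiply adds all byte sums into the top byte, replacing
 * several shift-and-add steps of the generic implementation. */
static unsigned int hweight64_sketch(uint64_t w)
{
	w -= (w >> 1) & 0x5555555555555555ULL;               /* 2-bit sums  */
	w  = (w & 0x3333333333333333ULL) +
	     ((w >> 2) & 0x3333333333333333ULL);             /* 4-bit sums  */
	w  = (w + (w >> 4)) & 0x0f0f0f0f0f0f0f0fULL;         /* 8-bit sums  */
	return (unsigned int)((w * 0x0101010101010101ULL) >> 56); /* total */
}

int main(void)
{
	printf("%u\n", hweight64_sketch(0xdeadbeefULL));     /* prints 24 */
	return 0;
}
```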
-
Committed by Andi Kleen
Drop support for non-e820 BIOS calls to get the memory map. The boot assembler code still has some support, but the C code no longer does.
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Paolo 'Blaisorblade' Giarrusso
When compiling a 64-bit kernel on an Ubuntu 6.06 32-bit system (whose GCC is also a cross-compiler for x86_64), I've seen that head.o is compiled as a 64-bit file (while it should not be) and ld complains about this during linking:
[AK: it happens on all systems with new binutils]
ld: warning: i386:x86-64 architecture of input file `arch/x86_64/boot/compressed/head.o' is incompatible with i386 output
I've verified that removing -m64 from the compilation flags, turning "-m64 -traditional -m32" into "-traditional -m32", fixes the issue.
Signed-off-by: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen
NMIs are not supposed to track the irq flags, but TRACE_IRQS_IRETQ did it anyway. Add a check.
Cc: mingo@elte.hu
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen
Give the printks a consistent prefix. Add some missing white space.
Cc: len.brown@intel.com
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen
- Remove a define that was used only once.
- Remove the too-large APIC ID check because we always support the full 8-bit range of APICs.
- Restructure the code a bit to be simpler.
Cc: len.brown@intel.com
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen
ACPI went to great trouble to get the APIC version and CPU capabilities of different CPUs before passing them to the mpparser, but all that data was used for was to print it out. Actually, it even faked some data based on the boot CPU, not on the actual CPU being booted. Remove all this code because it's not needed.
Cc: len.brown@intel.com
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen
Use normal pte accessors in change_page_attr() to access the PSE bits.
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen
Fix the pte_exec/mkexec page table accessor functions to really use the NX bit. Previously they only checked the USER bit and weren't actually used for anything. Then use them in change_page_attr() to manipulate the NX bit properly.
Signed-off-by: Andi Kleen <ak@suse.de>
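To illustrate what NX-aware accessors look like, here is a minimal sketch operating on a raw 64-bit PTE value. The real kernel versions take a pte_t and use the arch's _PAGE_NX definition, so the names below are assumptions for this example only.

```c
#include <stdint.h>

/* On x86_64 the no-execute bit is the top bit of the PTE (bit 63). */
#define PAGE_NX_BIT (1ULL << 63)

/* A page is executable when the NX bit is clear. */
static inline int pte_is_exec(uint64_t pte)
{
	return !(pte & PAGE_NX_BIT);
}

/* Return a copy of the PTE with execute permission granted. */
static inline uint64_t pte_make_exec(uint64_t pte)
{
	return pte & ~PAGE_NX_BIT;
}

/* Return a copy of the PTE with execute permission removed. */
static inline uint64_t pte_make_noexec(uint64_t pte)
{
	return pte | PAGE_NX_BIT;
}
```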
-
Committed by Andi Kleen
It is correct for its only caller right now, but not for possible future callers.
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen
And replace all users with the ordinary smp_processor_id(). The function was originally added to get some basic oops information out even if the GS register was corrupted. However, that hasn't worked for some time because printk is needed to print the oops and it already uses smp_processor_id(). Also, GS register corruptions are not particularly common anymore.
This also helps the Xen port, which would otherwise need to do this in a special way because it can't access the local APIC.
Cc: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Rafael J. Wysocki
Detect the situations in which the time after a resume from disk would be earlier than the time before the suspend, and prevent them from happening on x86_64.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Chuck Ebbert
In i386's entry.S, FIX_STACK() needs annotation because it replaces the stack pointer, and the rest of nmi() needs annotation in order to compile with these new annotations.
Signed-off-by: Chuck Ebbert <76306.1226@compuserve.com>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andrew Morton
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen
Apparently IA64 needs it, but i386/x86-64 don't anymore since gcc 2.95 support was dropped. Nobody else on linux-arch requested keeping it generically.
Cc: tony.luck@intel.com
Cc: kaos@sgi.com
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen
x86-64 inherited code from i386 that force-reserves the 640k-1MB area. That was needed on some old systems, but we generally trust the e820 map to be correct on 64-bit systems and mark all areas that are not memory correctly. This patch allows the real memory in there to be used. Or rather, the only way to find out if the reservation is still needed is to try; so far I'm optimistic.
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Dave Jones
This is now automatically included by kbuild.
Signed-off-by: Dave Jones <davej@redhat.com>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Fernando Luis Vázquez Cao
A kprobe executes IRET early, and that could cause NMI recursion and stack corruption.
Note: this problem was originally spotted and solved by Andi Kleen in the x86_64 architecture. This patch is an adaptation of his patch for i386.
AK: Merged with current code, which was a bit different.
AK: Removed a printk in the nmi handler that shouldn't have been there in the first place.
AK: Added missing include.
AK: Added KPROBES_END.
Signed-off-by: Fernando Vazquez <fernando@intellilink.co.jp>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Fernando Luis Vázquez Cao
A kprobe executes IRET early, and that could cause NMI recursion and stack corruption.
Note: this problem was originally spotted by Andi Kleen. This patch adds fixes not included in his original patch.
[AK: Jan Beulich originally discovered these classes of bugs]
Signed-off-by: Fernando Vazquez <fernando@intellilink.co.jp>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Magnus Damm
Mark i386-specific cpu cache functions as __cpuinit. They are all only called from arch/i386/common.c:display_cache_info(), which is already marked as __cpuinit.
Signed-off-by: Magnus Damm <magnus@valinux.co.jp>
Signed-off-by: Andi Kleen <ak@suse.de>
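For context on this and the following __cpuinit commits, here is a simplified sketch of what the annotation amounts to. The exact macros live in include/linux/init.h and differ in detail (section names, additional attributes), so this is an assumption-laden illustration, not the kernel's literal definition.

```c
/* When CPU hotplug is disabled, code that only runs while bringing up a
 * CPU can live in the init text section and be discarded after boot.
 * With hotplug enabled it must stay resident, so the marker is a no-op. */
#ifdef CONFIG_HOTPLUG_CPU
#define __cpuinit
#else
#define __cpuinit __attribute__((__section__(".init.text")))
#endif

/* Example: a CPU-setup helper that is only needed at CPU bring-up time.
 * The function name is hypothetical. */
static void __cpuinit example_cpu_setup(void)
{
	/* per-CPU initialization work would go here */
}
```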
-
Committed by Magnus Damm
Mark i386-specific cpu identification functions as __cpuinit. They are all only called from arch/i386/common.c:identify_cpu(), which is already marked as __cpuinit.
Signed-off-by: Magnus Damm <magnus@valinux.co.jp>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Magnus Damm
Mark i386-specific cpu init functions as __cpuinit. They are all only called from arch/i386/common.c:identify_cpu(), which is already marked as __cpuinit. This patch also removes the empty function init_umc().
Signed-off-by: Magnus Damm <magnus@valinux.co.jp>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Magnus Damm
The different cpu_dev structures are all used from __cpuinit callers, from what I can tell. So mark them as __cpuinitdata instead of __initdata. I am a little bit unsure about arch/i386/common.c:default_cpu, especially when it comes to the purpose of this_cpu.
Signed-off-by: Magnus Damm <magnus@valinux.co.jp>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Magnus Damm
The init_amd() function is only called from identify_cpu(), which is already marked as __cpuinit. So let's mark it as __cpuinit too.
Signed-off-by: Magnus Damm <magnus@valinux.co.jp>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Magnus Damm
cpu_dev->c_identify is only called from arch/i386/common.c:identify_cpu(), and this after generic_identify() has already been called. There is no need to call this function twice and hook it in c_identify, but I may be wrong; please double check before applying. This patch also removes generic_identify() from cpu.h to avoid unnecessary future nesting.
Signed-off-by: Magnus Damm <magnus@valinux.co.jp>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Keith Mannthey
Fix for the x86_64 kernel mapping code. Without this patch the update path only inits one pmd_page's worth of memory and tramples any entries on it. Now the calling convention for phys_pmd_init and phys_init is to always pass a [pmd/pud] page, not an offset within a page.
Signed-off-by: Keith Mannthey <kmannth@us.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Andrew Morton
Implement pause_on_oops() on x86_64.
AK: I redid the patch to do the oops_enter/exit in the existing oops_begin()/end(), which makes it much shorter.
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Arjan van de Ven
Right now the kernel on x86-64 has 100% lazy FPU behavior: after *every* context switch a trap is taken on the first FPU use to restore the FPU context lazily. This is of course great for applications that have very sporadic or no FPU use (since you then avoid doing the expensive save/restore all the time). However, for very frequent FPU users you take an extra trap every context switch.
The patch below adds a simple heuristic to this code: after 5 consecutive context switches of FPU use, the lazy behavior is disabled and the context gets restored on every context switch. If the app indeed uses the FPU, the trap is avoided (the chance of the 6th time slice using the FPU after the previous 5 have done so is obviously quite high). After 256 switches this is reset and the lazy behavior returns (until there are 5 consecutive FPU-using switches again). The reason for this is to give apps that do longer bursts of FPU use the lazy behavior back after some time.
[akpm@osdl.org: place new task_struct field next to jit_keyring to save space]
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Andi Kleen <ak@muc.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
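A minimal sketch of the counting heuristic described above, written as standalone C. The names (struct task, fpu_counter, FPU_PRELOAD_THRESHOLD) are illustrative assumptions; in the kernel the counter lives in task_struct and is consulted in the context-switch path and the device-not-available trap, and the "consecutive" bookkeeping is simplified here.

```c
#include <stdio.h>

#define FPU_PRELOAD_THRESHOLD 5

struct task {
	/* 8-bit counter: after 256 FPU-using switches it wraps back to 0,
	 * which re-enables lazy restore for a while. */
	unsigned char fpu_counter;
};

/* Called when a time slice ends and the task actually used the FPU. */
static void note_fpu_use(struct task *t)
{
	t->fpu_counter++;            /* wraps at 256 by design */
}

/* Decide at context-switch time whether to restore FPU state eagerly. */
static int should_preload_fpu(const struct task *t)
{
	return t->fpu_counter > FPU_PRELOAD_THRESHOLD;
}

int main(void)
{
	struct task t = { 0 };

	for (int i = 0; i < 6; i++)
		note_fpu_use(&t);
	printf("preload after 6 FPU slices: %d\n", should_preload_fpu(&t)); /* 1 */
	return 0;
}
```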
-
Committed by Ashok Raj
This patch enables ACPI-based physical CPU hotplug support for x86_64. It implements acpi_map_lsapic() and acpi_unmap_lsapic() to support physical cpu hotplug.
Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Andi Kleen <ak@muc.de>
Cc: "Brown, Len" <len.brown@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Brice Goglin
Make an mmconfig warning print the bus id with a regular format.
Signed-off-by: Brice Goglin <brice@myri.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Magnus Damm
cyrix_identify() should be __init because transmeta_identify() is. tsc_init() is only called from setup_arch(), which is marked as __init. These two section mismatches were detected by running modpost on a vmlinux image compiled with CONFIG_RELOCATABLE=y.
Signed-off-by: Magnus Damm <magnus@valinux.co.jp>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Magnus Damm
There is no need to duplicate the topology_init() function.
Signed-off-by: Magnus Damm <magnus@valinux.co.jp>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Eric W. Biederman
Now for a completely different but trivial approach. I just boot tested it with 255 CPUs and everything worked.
Currently everything we place in the per-cpu area (except module data) is known at compile time. So instead of allocating a fixed size for the per_cpu area, allocate the number of bytes we need plus a fixed constant to be used for modules. It isn't perfect, but it is much less of a pain to work with than what we are doing now.
AK: fixed warning
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Andi Kleen <ak@suse.de>
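A rough sketch of the sizing idea this commit describes: size the per-cpu allocation from the static per-cpu section plus a fixed module reserve, rather than one large fixed guess. The stand-in symbols, the 8 KiB reserve, and the alignment constant are assumptions for this example; in the kernel the section bounds come from the linker script.

```c
#include <stdio.h>

/* Stand-ins for the linker-provided symbols that bracket the static
 * per-cpu section; in the kernel these come from vmlinux.lds, not C. */
static char percpu_section[1500];
static char *const __per_cpu_start = percpu_section;
static char *const __per_cpu_end   = percpu_section + sizeof(percpu_section);

/* Fixed slack reserved for per-cpu variables declared by modules
 * (the figure is illustrative, not the kernel's exact constant). */
#define PERCPU_MODULE_RESERVE	8192UL

#define CACHE_LINE		128UL
#define CACHE_ALIGN(x)		(((x) + CACHE_LINE - 1) & ~(CACHE_LINE - 1))

/* Bytes to set aside per CPU: what the image needs plus module slack. */
static unsigned long per_cpu_area_size(void)
{
	unsigned long static_size =
		(unsigned long)(__per_cpu_end - __per_cpu_start);

	return CACHE_ALIGN(static_size + PERCPU_MODULE_RESERVE);
}

int main(void)
{
	printf("per-cpu allocation per CPU: %lu bytes\n", per_cpu_area_size());
	return 0;
}
```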
-
Committed by Rusty Russell
The implementation comes from Zach's "[RFC, PATCH 10/24] i386 Vmi descriptor changes":
Descriptor and trap table cleanups. Add cleanly written accessors for IDT and GDT gates so the subarch may override them. Note that this allows the hypervisor to transparently tweak the DPL of the descriptors as well as the RPL of segments in those descriptors, with no unnecessary kernel code modification. It also allows the hypervisor implementation of the VMI to tweak the gates, allowing for custom exception frames or extra layers of indirection above the guest fault / IRQ handlers.
Signed-off-by: Zachary Amsden <zach@vmware.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Andi Kleen <ak@suse.de>
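To make the accessor idea concrete, here is a hedged sketch of a single descriptor-write helper that both the GDT and IDT paths can funnel through; the names echo the i386 convention, but the exact signatures in the patch may differ.

```c
#include <stdint.h>

/* Write one 8-byte descriptor (two 32-bit words) into a descriptor table.
 * Routing every IDT/GDT update through one helper gives a subarch or
 * hypervisor layer a single point to override or trap these writes. */
static inline void write_dt_entry(void *dt, int entry,
				  uint32_t entry_low, uint32_t entry_high)
{
	uint32_t *lp = (uint32_t *)dt + entry * 2;

	lp[0] = entry_low;
	lp[1] = entry_high;
}

/* GDT and IDT entries share the same 8-byte layout, so both reuse it. */
#define write_gdt_entry(dt, entry, lo, hi) write_dt_entry((dt), (entry), (lo), (hi))
#define write_idt_entry(dt, entry, lo, hi) write_dt_entry((dt), (entry), (lo), (hi))
```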
-
Committed by Andi Kleen
And add proper CFI annotation to it, which was previously impossible. This prevents "stuck" messages from the dwarf2 unwinder when it reaches the top of a kernel stack.
Includes feedback from Jan Beulich.
Cc: jbeulich@novell.com
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Adrian Bunk
enable_local_apic can now become static.
Cc: len.brown@intel.com
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Adrian Bunk
acpi_force can become static.
Cc: len.brown@intel.com
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andi Kleen <ak@suse.de>
-