- 26 September 2006, 40 commits
-
-
Committed by Rusty Russell

We allow for the fact that the guest kernel may not run in ring 0. This requires some abstraction in a few places when setting %cs or checking privilege level (user vs kernel). This is Chris' [RFC PATCH 15/33] move segment checks to subarch, except rather than using #define USER_MODE_MASK, which depends on a config option, we use Zach's more flexible approach of assuming ring 3 == userspace. I also used "get_kernel_rpl()" over "get_kernel_cs()" because I think it reads better in the code...

1) Remove the hardcoded 3 and introduce #define SEGMENT_RPL_MASK 3
2) Add a get_kernel_rpl() macro, and don't assume it's zero.

And: clean-up of the patch for letting the kernel run other than ring 0:

a. Add some comments about the SEGMENT_IS_*_CODE() macros.
b. Add a USER_RPL macro. (Code was comparing a value to a mask in some places and to the magic number 3 in other places.)
c. Add macros for the table indicator field and use them.
d. Change the entry.S tests for the LDT stack segment to use the macros.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Zachary Amsden <zach@vmware.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Andi Kleen <ak@suse.de>
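As a rough illustration of the helpers this commit names (SEGMENT_RPL_MASK, USER_RPL, get_kernel_rpl() and the table-indicator macros), here is a minimal sketch; the user_mode_cs() helper and the exact spellings are illustrative, not necessarily the kernel's final code.

    /* Sketch only: selector layout on i386 (RPL in bits 0-1, table indicator
     * in bit 2); user_mode_cs() is a made-up helper for illustration. */
    #define SEGMENT_RPL_MASK 0x3   /* requested privilege level of a selector */
    #define SEGMENT_TI_MASK  0x4   /* table indicator: 0 = GDT, 1 = LDT */
    #define SEGMENT_LDT      0x4
    #define SEGMENT_GDT      0x0
    #define USER_RPL         0x3   /* ring 3 == userspace */

    /* Native kernels really run in ring 0; a subarch/hypervisor port can
     * override this instead of the code assuming a literal 0 everywhere. */
    #define get_kernel_rpl() 0

    static inline int user_mode_cs(unsigned int cs)
    {
        /* Compare the selector's RPL against USER_RPL instead of a magic 3. */
        return (cs & SEGMENT_RPL_MASK) == USER_RPL;
    }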
-
Committed by Rusty Russell

Abstract sensitive instructions in assembler code, replacing them with macros (which currently are #defined to the native versions). We use long names: assembler is case-insensitive, so if something goes wrong and the macros do not expand, it would assemble anyway. The resulting object files are exactly the same as before.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Jeremy Fitzhardinge

I just added type checking for assignments to the PDA in the i386 PDA code. Here's the x86-64 equivalent. (Obviously this doesn't contain the latest x86-64 PDA change.)

Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Andi Kleen <ak@suse.de>
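One way such compile-time type checking can be arranged is sketched below; the struct and macro names are made up for the example and only illustrate the idea, not the kernel's actual PDA accessors.

    /* Illustrative only: a per-CPU data structure and a write macro that
     * checks its value against the named field's type at compile time. */
    struct my_pda {
        unsigned long kernelstack;
        int cpunumber;
    };

    #define pda_write(field, val)                                        \
        do {                                                             \
            /* typeof() of the field forces the assignment below to be   \
             * checked against the field's declared type. */             \
            typeof(((struct my_pda *)0)->field) __v = (val);             \
            (void)__v; /* a real version would store __v at %gs:offset */\
        } while (0)

    /* pda_write(cpunumber, 3) compiles; pda_write(cpunumber, "x") is
     * warned about or rejected (pointer assigned to int). */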
-
Committed by Andi Kleen

Apparently that is the more official way to get numbers without a $ in inline assembly.

Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Arjan van de Ven

This patch adds the per-thread cookie field to the task struct and the PDA. It also makes sure that the PDA value gets the new cookie value at context switch, and that a new task gets a new cookie at task creation time.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andi Kleen <ak@suse.de>
CC: Andi Kleen <ak@suse.de>
-
Committed by Arjan van de Ven

Change the comments in the pda structure so that the first fields have their offsets documented, and align the comments. The stack protector series needs a field at offset 40 (gcc ABI); annotate up to offset 40 for that reason.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andi Kleen <ak@suse.de>
CC: Andi Kleen <ak@suse.de>
-
Committed by Magnus Damm

kexec: Avoid overwriting the current pgd (V4, i386)

This patch upgrades the i386-specific kexec code to avoid overwriting the current pgd. Overwriting the current pgd is bad when CONFIG_CRASH_DUMP is used to start a secondary kernel that dumps the memory of the previous kernel. The code introduces a new set of page tables. These tables are used to provide an executable identity mapping without overwriting the current pgd.

Signed-off-by: Magnus Damm <magnus@valinux.co.jp>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Magnus Damm

kexec: Avoid overwriting the current pgd (V4, x86_64)

This patch upgrades the x86_64-specific kexec code to avoid overwriting the current pgd. Overwriting the current pgd is bad when CONFIG_CRASH_DUMP is used to start a secondary kernel that dumps the memory of the previous kernel. The code introduces a new set of page tables. These tables are used to provide an executable identity mapping without overwriting the current pgd.

Signed-off-by: Magnus Damm <magnus@valinux.co.jp>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Keith Owens

Remove most of the special cases for the debug IST stack. This is a follow-on clean-up patch; it requires the bug fix patch that adds orig_ist.

Signed-off-by: Keith Owens <kaos@ocs.com.au>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by H. Peter Anvin

The EDD code would scan the command line as a fixed array, without taking into account whitespace, null termination, the old command-line protocol, the rule that later options override earlier ones, or the fact that the command line may not be reachable from INITSEG. This should fix those problems and enable us to use a longer command line.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen

Based on an idea by Jeremy Fitzhardinge: replace the volatiles and memory clobbers in the PDA access with telling gcc about accesses to a proxy PDA structure that doesn't actually exist. The dummy accesses give a defined ordering for read/write accesses. Also add some memory barriers to the early GS initialization to make sure no PDA access is moved before it.

The advantage is some .text savings (probably most from better code for accessing "current"):

       text    data     bss     dec    hex filename
    4845647 1223688  615864 6685199 66020f vmlinux
    4837780 1223688  615864 6677332 65e354 vmlinux-pda

1.2% smaller code

Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Andi Kleen <ak@suse.de>
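A minimal sketch of the "proxy structure" trick described here, with made-up struct and field names (the real PDA has many fields and uses their actual offsets):

    /* Illustrative only.  The proxy object is declared but never defined;
     * it exists purely so gcc has a memory operand to order against. */
    struct proxy_pda {
        unsigned long kernelstack;
    };
    extern struct proxy_pda _proxy_pda;   /* never allocated anywhere */

    static inline unsigned long read_pda_kernelstack(void)
    {
        unsigned long ret;
        /* The "m" operand is not referenced in the template, so no code or
         * relocation for _proxy_pda is emitted.  It only tells gcc that this
         * asm reads "the kernelstack slot", which orders it against other
         * accesses to the same slot without a blanket "memory" clobber. */
        asm("movq %%gs:0, %0"             /* 0 stands in for the real offset */
            : "=r" (ret)
            : "m" (_proxy_pda.kernelstack));
        return ret;
    }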
-
Committed by Andi Kleen

They cannot actually be freed because the FACS table has a shared-with-the-BIOS lock.

Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen

Based on a patch from David Rientjes <rientjes@google.com>, but changed by AK. Optimizes the 64-bit Hamming weight for x86_64 processors, assuming they have fast multiplication. Uses five fewer bitops than the generic hweight64. A benchmark on one EM64T showed ~25% speedup with 2^24 consecutive calls. Defines a new ARCH_HAS_FAST_MULTIPLIER that can be set by other architectures that can also multiply fast.

Signed-off-by: Andi Kleen <ak@suse.de>
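The multiplication-based population count in question is the textbook one; a standalone sketch (not necessarily the kernel's exact implementation) looks like this:

    /* Count set bits in a 64-bit word.  The final multiply by 0x0101...01
     * sums the per-byte counts into the top byte, replacing several extra
     * shift/add steps of the generic version. */
    static unsigned int hweight64_fast_mul(unsigned long long w)
    {
        w -= (w >> 1) & 0x5555555555555555ULL;                    /* 2-bit sums */
        w  = (w & 0x3333333333333333ULL) +
             ((w >> 2) & 0x3333333333333333ULL);                  /* 4-bit sums */
        w  = (w + (w >> 4)) & 0x0F0F0F0F0F0F0F0FULL;              /* 8-bit sums */
        return (unsigned int)((w * 0x0101010101010101ULL) >> 56); /* byte total */
    }

    /* e.g. hweight64_fast_mul(0xFFULL) == 8 */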
-
Committed by Andi Kleen

Drop support for non-e820 BIOS calls to get the memory map. The boot assembler code still has some support, but the C code no longer does.

Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen

- Remove a define that was used only once.
- Remove the check for too-large APIC IDs, because we always support the full 8-bit range of APICs.
- Restructure the code a bit to be simpler.

Cc: len.brown@intel.com
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen

Use normal pte accessors in change_page_attr() to access the PSE bits.

Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen

Fix the pte_exec/mkexec page table accessor functions to really use the NX bit. Previously they only checked the USER bit, but weren't actually used for anything. Then use them in change_page_attr() to manipulate the NX bit properly.

Signed-off-by: Andi Kleen <ak@suse.de>
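In spirit, NX-aware accessors of this kind look like the following sketch; pte_t is reduced to a bare value here and the code is illustrative rather than the kernel's exact definitions.

    /* The x86-64 no-execute bit lives in bit 63 of a PTE. */
    #define _PAGE_NX (1ULL << 63)

    typedef struct { unsigned long long pte; } pte_t;

    static inline int pte_exec(pte_t pte)
    {
        return !(pte.pte & _PAGE_NX);   /* executable iff NX is clear */
    }

    static inline pte_t pte_mkexec(pte_t pte)
    {
        pte.pte &= ~_PAGE_NX;           /* clear NX to allow execution */
        return pte;
    }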
-
Committed by Andi Kleen

And replace all users with ordinary smp_processor_id(). The function was originally added to get some basic oops information out even if the GS register was corrupted. However, that no longer worked in some cases, because printk is needed to print the oops and it already uses smp_processor_id(). Also, GS register corruptions are not particularly common anymore. This also helps the Xen port, which would otherwise need to do this in a special way because it can't access the local APIC.

Cc: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Chuck Ebbert

In i386's entry.S, FIX_STACK() needs annotation because it replaces the stack pointer. And the rest of nmi() needs annotation in order to compile with these new annotations.

Signed-off-by: Chuck Ebbert <76306.1226@compuserve.com>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen

Apparently IA64 needs it, but i386/x86-64 don't anymore since gcc 2.95 support was dropped. Nobody else on linux-arch requested keeping it in generic code.

Cc: tony.luck@intel.com
Cc: kaos@sgi.com
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Dave Jones

This is now automatically included by kbuild.

Signed-off-by: Dave Jones <davej@redhat.com>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Arjan van de Ven

Right now the kernel on x86-64 has 100% lazy FPU behavior: after *every* context switch, a trap is taken for the first FPU use to restore the FPU context lazily. This is of course great for applications that have very sporadic or no FPU use (since then you avoid doing the expensive save/restore all the time). However, for very frequent FPU users... you take an extra trap every context switch.

The patch below adds a simple heuristic to this code: after 5 consecutive context switches of FPU use, the lazy behavior is disabled and the context gets restored at every context switch. If the app indeed uses the FPU, the trap is avoided. (The chance that the 6th time slice uses the FPU, given that the previous 5 did, is obviously quite high.) After 256 switches, this is reset and lazy behavior resumes (until there are 5 consecutive ones again). The reason for this is to give apps that do longer bursts of FPU use the lazy behavior back after some time.

[akpm@osdl.org: place new task_struct field next to jit_keyring to save space]
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Andi Kleen <ak@muc.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
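A minimal sketch of the counter logic described above, with invented struct and function names (the real code keeps the counter in task_struct and ties it into the context switch path and the device-not-available trap):

    /* Illustrative only. */
    struct fpu_hint {
        unsigned char fpu_counter;   /* u8: wraps at 256, re-enabling lazy mode */
    };

    /* Called at context switch for a task that used the FPU in its last slice. */
    static int fpu_switch_in(struct fpu_hint *next)
    {
        next->fpu_counter++;         /* length of the consecutive FPU-use streak */
        /* After more than 5 consecutive FPU-using slices, restore the FPU
         * context eagerly so the first FPU instruction no longer traps.  The
         * u8 wrap at 256 periodically drops back to lazy restore. */
        return next->fpu_counter > 5;   /* nonzero: preload the FPU now */
    }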
-
Committed by Ashok Raj

This patch enables ACPI-based physical CPU hotplug support for x86_64. It implements acpi_map_lsapic() and acpi_unmap_lsapic() to support physical CPU hotplug.

Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Andi Kleen <ak@muc.de>
Cc: "Brown, Len" <len.brown@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
-
Committed by Eric W. Biederman

Now for a completely different but trivial approach. I just boot tested it with 255 CPUs and everything worked. Currently everything we place in the per-cpu area (except module data) is known about at compile time. So instead of allocating a fixed size for the per_cpu area, allocate the number of bytes we need plus a fixed constant to be used for modules. It isn't perfect, but it is much less of a pain to work with than what we are doing now.

AK: fixed warning

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Andi Kleen <ak@suse.de>
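The sizing rule it describes amounts to something like the sketch below; the reserve constant and function name are made up, while __per_cpu_start/__per_cpu_end are the usual linker-script symbols bounding the compiled-in per-cpu section.

    /* Illustrative only. */
    extern char __per_cpu_start[], __per_cpu_end[];

    #define PERCPU_MODULE_RESERVE 8192   /* assumed fixed reserve for modules */

    static unsigned long percpu_area_size(void)
    {
        /* Bytes actually declared at compile time, plus a fixed chunk that
         * modules can claim later, instead of one large fixed size. */
        return (unsigned long)(__per_cpu_end - __per_cpu_start)
               + PERCPU_MODULE_RESERVE;
    }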
-
Committed by Rusty Russell

The implementation comes from Zach's [RFC, PATCH 10/24] i386 Vmi descriptor changes: Descriptor and trap table cleanups. Add cleanly written accessors for IDT and GDT gates so the subarch may override them. Note that this allows the hypervisor to transparently tweak the DPL of the descriptors as well as the RPL of segments in those descriptors, with no unnecessary kernel code modification. It also allows the hypervisor implementation of the VMI to tweak the gates, allowing for custom exception frames or extra layers of indirection above the guest fault/IRQ handlers.

Signed-off-by: Zachary Amsden <zach@vmware.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Andi Kleen <ak@suse.de>
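The shape of such an accessor, in a heavily simplified form with invented names (real gate descriptors carry type, DPL and segment fields that are omitted here):

    /* Illustrative only: an i386 gate is two 32-bit words. */
    struct gate_desc {
        unsigned int a, b;
    };

    static inline void write_idt_entry(struct gate_desc *idt, int entry,
                                       unsigned int a, unsigned int b)
    {
        /* The native version is a plain store; a hypervisor-aware subarch
         * can override this hook to adjust the DPL or interpose on the
         * handler without touching the callers. */
        idt[entry].a = a;
        idt[entry].b = b;
    }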
-
Committed by Adrian Bunk

enable_local_apic can now become static.

Cc: len.brown@intel.com
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Prasanna S.P

This patch moves entry.S:error_entry to the .kprobes.text section: since code marked unsafe for kprobes jumps directly to entry.S::error_entry, it must be marked unsafe as well. This patch also moves all the ".previous.text" asm directives to ".previous" for the kprobes section.

AK: Following a similar i386 patch from Chuck Ebbert
AK: Also merged Jeremy's fix in.

+From: Jeremy Fitzhardinge <jeremy@goop.org>

KPROBE_ENTRY does a .section .kprobes.text, and expects its users to do a .previous at the end of the function. Unfortunately, if any code within the function switches sections, for example .fixup, then the .previous ends up putting all subsequent code into .fixup. Worse, any subsequent .fixup code gets intermingled with the code it's supposed to be fixing (which is also in .fixup). It's surprising this didn't cause more havoc. The fix is to use .pushsection/.popsection, so this stuff nests properly. A further cleanup would be to get rid of all .section/.previous pairs, since they're inherently fragile.

+From: Chuck Ebbert <76306.1226@compuserve.com>

Because code marked unsafe for kprobes jumps directly to entry.S::error_code, that must be marked unsafe as well. The easiest way to do that is to move the page fault entry point to just before error_code and let it inherit the same section. Also moved all the ".previous" asm directives for kprobes sections to column 1 and removed ".text" from them.

Signed-off-by: Chuck Ebbert <76306.1226@compuserve.com>
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen

Cc: jbeulich@novell.com
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen

Following the x86-64 patches, and in fact reusing code from them: convert the standard backtracer to do all output using callbacks. Use the x86-64 stack tracer implementation that uses these callbacks to implement the stacktrace interface. This allows the new dwarf2 unwinder to be used for stacktrace and gives better backtraces.

Cc: mingo@elte.hu
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen

This unifies the standard backtracer and the new stacktrace-in-memory backtracer. The standard one is converted to use callbacks, and stacktrace is then reimplemented using these new callbacks. The main advantage is that stacktrace can now use the new dwarf2 unwinder and avoid false positives in many cases. I kept it simple to make sure the standard backtracer stays reliable.

Cc: mingo@elte.hu
Signed-off-by: Andi Kleen <ak@suse.de>
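The callback split in this and the previous commit has roughly the following shape; the struct and function names here are illustrative, not the exact kernel API.

    /* Illustrative only: the stack walker stays generic and each consumer
     * supplies callbacks for what to do with the data. */
    struct backtrace_ops {
        void (*address)(void *data, unsigned long addr);  /* one return address */
        void (*stack)(void *data, const char *name);      /* switched stacks */
    };

    /* The in-memory stacktrace consumer just records addresses... */
    struct saved_trace {
        unsigned long *entries;
        unsigned int nr_entries, max_entries;
    };

    static void save_address(void *data, unsigned long addr)
    {
        struct saved_trace *t = data;
        if (t->nr_entries < t->max_entries)
            t->entries[t->nr_entries++] = addr;
    }

    /* ...while the oops path supplies a callback that prints each frame,
     * and both share one walker (which may use the dwarf2 unwinder). */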
-
Committed by Andi Kleen

Lockdep can call the dwarf2 unwinder early, and the dwarf2 code uses safe_smp_processor_id(), which tries to access the local APIC page. But that doesn't work before the APIC code has set up its fixmap. Check for this case and always return the boot CPU then.

Cc: jbeulich@novell.com
Cc: mingo@elte.hu
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen

- Remove the unused all_contexts parameter; no caller used it.
- Move the skip argument into the structure (needed for follow-on patches).

Cc: mingo@elte.hu
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen

And move one into proto.h.

Cc: len.brown@intel.com
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen

Instead of hackish manual parsing. Requires the earlier i386 patchkit, but also fixes i386 early_printk again. I removed some obsolete really-early parameters which didn't do anything useful, and made a few parameters that needed it early (mostly oops printing setup). Also removed one panic check that wasn't visible without an early console anyway (the early console is now initialized after that panic). This cleans up a lot of code.

Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Rusty Russell

This patch replaces the open-coded early command-line parsing throughout the i386 boot code with the generic mechanism (already used by ppc, powerpc, ia64 and s390). The code was inconsistent about whether it deletes the option from the cmdline or not, meaning some of these will get passed through the environment into init.

This transformation is mainly mechanical, but there are some notable parts:

1) Grammar: s/linux never set's it up/linux never sets it up/
2) Remove hacked-in earlyprintk= option scanning. When someone actually implements CONFIG_EARLY_PRINTK, then they can use early_param(). [AK: actually it is implemented, but I'm adding the early_param for it in the next x86-64 patch]
3) Move the declaration of generic_apic_probe() from setup.c into asm/apic.h
4) Various parameters are now moved into their appropriate files (thanks Andi).
5) All parse functions which examine arg need to check for NULL, except one where it has subtle humor value.

AK: re-added acpi_sci handling which was completely dropped
AK: moved some more variables into acpi/boot.c

Cc: len.brown@intel.com
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Andi Kleen <ak@suse.de>
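For reference, the generic mechanism referred to here is registered with early_param(); the option name, variable and parsing below are made up purely to show the shape of a conversion.

    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/errno.h>

    static unsigned long example_setting __initdata;

    /* Parse functions receive the option's argument, which may be NULL. */
    static int __init parse_example(char *arg)
    {
        if (!arg)
            return -EINVAL;
        example_setting = simple_strtoul(arg, NULL, 0);
        return 0;
    }
    early_param("example", parse_example);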
-
Committed by Andi Kleen

- Inline spinlock strings into their inline functions.
- Convert macros to typesafe inlines.
- Replace some leftover __asm__ __volatile__s with asm volatile.

Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen

- Inline spinlock strings into their inline functions.
- Convert macros to typesafe inlines.
- Replace some leftover __asm__ __volatile__s with asm volatile.

Signed-off-by: Andi Kleen <ak@suse.de>
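As an illustration of the macro-to-typesafe-inline conversion in these two commits, here is a simplified unlock helper; the type and asm are reduced from, not copied out of, the kernel.

    /* Illustrative only. */
    typedef struct { volatile unsigned int slock; } my_raw_spinlock_t;

    /* Before: the unlock body lived in a string macro pasted into callers.
     * After: a typed inline, so passing anything other than a
     * my_raw_spinlock_t * is a compile error, and the asm exists once. */
    static inline void my_raw_spin_unlock(my_raw_spinlock_t *lock)
    {
        asm volatile("movb $1, %0" : "+m" (lock->slock) : : "memory");
    }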
-
Committed by Andi Kleen

Lock sections cannot be handled by the dwarf2 unwinder. The disadvantage is a taken branch in the hot path.

Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen

Lock sections don't work with the new dwarf2 unwinder. This generates slightly smaller code. It adds one more taken jump to the fast path.

Cc: jbeulich@novell.com
Signed-off-by: Andi Kleen <ak@suse.de>
-
Committed by Andi Kleen

Lock sections don't work with the new dwarf2 unwinder. This generates slightly smaller code. It adds one more taken jump to the fast path. Also move the trampolines into semaphore.S and add proper CFI annotations.

Cc: jbeulich@novell.com
Signed-off-by: Andi Kleen <ak@suse.de>
-