- 20 Jul 2009, 1 commit
-
-
By Akinobu Mita

Rename set_base()/set_limit() to set_desc_base()/set_desc_limit() and rewrite them in C. These are the natural counterparts of the existing get_desc_base()/get_desc_limit(). The conversion actually found a bug in apm_32.c: bad_bios_desc is written at run-time, but it is defined as a const variable.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
LKML-Reference: <20090718151105.GC11294@localhost.localdomain>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 02 Mar 2009, 2 commits
-
-
By Jeremy Fitzhardinge

It's the correct thing to do before using the struct in a prototype.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Jeremy Fitzhardinge

With x86-32 and -64 using the same mechanism for managing the TSS I/O permissions bitmap, large chunks of process*.c are trivially unifiable, including:

- exit_thread
- flush_thread
- __switch_to_xtra (along with tsc enable/disable)

and as bonus pickups:

- sys_fork
- sys_vfork

(Note: asmlinkage expands to empty on x86-64.)

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 11 Feb 2009, 1 commit
-
-
By Tejun Heo

Impact: fix x86_32 stack protector

Brian Gerst found out that %gs was being initialized to stack_canary instead of stack_canary - 20, which basically gave the same canary value to all threads. Fixing this also exposed the following bugs:

* cpu_idle() didn't call boot_init_stack_canary().
* Stack canary switching in switch_to() was being done too late, making the initial run of a new thread use the old stack canary value.

Fix all of them, and while at it update the comment in cpu_idle() about calling boot_init_stack_canary().

Reported-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 10 Feb 2009, 3 commits
-
-
By Tejun Heo

Impact: stack protector for x86_32

Implement the stack protector for x86_32. GDT entry 28 is used for it; it is set to point to stack_canary - 20 and has a length of 24 bytes. CONFIG_CC_STACKPROTECTOR turns off CONFIG_X86_32_LAZY_GS and sets %gs to the stack canary segment on entry. As %gs is otherwise unused by the kernel, the canary can be anywhere. It's defined as a percpu variable.

x86_32 exception handlers take the register frame on the stack directly as struct pt_regs. With -fstack-protector turned on, gcc copies the whole structure after the stack canary and (of course) doesn't copy it back on return, thus losing all changes. For now, -fno-stack-protector is added to all files which contain those functions. We definitely need something better.

Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Tejun Heo

Impact: pt_regs changed, lazy gs handling made optional, slight overhead added to SAVE_ALL, error_code path simplified a bit

On x86_32, %gs hasn't been used by the kernel and is handled lazily: pt_regs has no place for it, and gs is saved/loaded only when necessary. In preparation for stack protector support, this patch makes lazy %gs handling optional by doing the following:

* Add CONFIG_X86_32_LAZY_GS and a place for gs in pt_regs.
* Save and restore %gs along with the other registers in entry_32.S unless LAZY_GS. Note that this unfortunately adds "pushl $0" to SAVE_ALL even when LAZY_GS. However, it adds no overhead to the common exit path and simplifies the entry path with error code.
* Define different user_gs accessors depending on LAZY_GS, and add lazy_save_gs() and lazy_load_gs(), which are noops if !LAZY_GS. The lazy_*_gs() ops are used to save, load and clear %gs lazily.
* Define ELF_CORE_COPY_KERNEL_REGS(), which always reads %gs directly.

The xen and lguest changes need to be verified.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Tejun Heo

Impact: cleanup

On x86_32, %gs is handled lazily. It's not saved and restored on kernel entry/exit but only when necessary, which usually means during task switch, though there are a few other places. Currently this is done by calling savesegment() and loadsegment() explicitly. Define get_user_gs(), set_user_gs() and task_user_gs() and use them instead. While at it, clean up the register access macros in signal.c. This cleans up the code a bit and will help future changes.

Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 21 Jan 2009, 1 commit
-
-
By Tejun Heo

Impact: cleanup

In switch_to(), instead of taking the offset to irq_stack_union.stack, make it a proper percpu access using __percpu_arg() and per_cpu_var().

Signed-off-by: Tejun Heo <tj@kernel.org>
-
- 20 Jan 2009, 2 commits
-
-
By Brian Gerst

Impact: x86_64 percpu area layout change, irq_stack now at the beginning

Now that the PDA is empty except for the stack canary, it can be removed. The irq stack is moved to the start of the per-cpu section. If the stack protector is enabled, the canary overlaps the bottom 48 bytes of the irq stack.

tj:
* updated subject
* dropped asm relocation of irq_stack_ptr
* updated comments a bit
* rebased on top of the stack canary changes

Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
By Tejun Heo

Impact: no unnecessary stack canary swapping during context switch

There's no point in moving stack_canary around during context switch if it's not enabled. Conditionalize it.

Signed-off-by: Tejun Heo <tj@kernel.org>
-
- 18 Jan 2009, 2 commits
-
-
By Brian Gerst

Accessing memory through %gs should not use rip-relative addressing. Adding a P prefix to the argument tells gcc not to add (%rip) to the memory references.

Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
By Brian Gerst

Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
- 11 Jan 2009, 1 commit
-
-
By Benjamin LaHaise

Impact: micro-optimization

This patch removes an unnecessary locked instruction from switch_to(). TIF_FORK is only ever set in copy_thread() on initial process creation, and gets cleared during the first scheduling of the process. As such, it is safe to use an unlocked test for the flag within switch_to().

Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 17 Dec 2008, 1 commit
-
-
By Jaswinder Singh

Impact: cleanup

In asm/system.h, moved __switch_to out of the CONFIG_X86_32 section, as it is common to both 32-bit and 64-bit. In asm/pctl.h, defined sys_arch_prctl.

Signed-off-by: Jaswinder Singh <jaswinder@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 11 Nov 2008, 1 commit
-
-
By Ivan Vecera

Impact: really halt all CPUs on halt

The function machine_halt (resp. native_machine_halt) is empty on x86. When the command 'halt -f' is invoked, the message "System halted." is displayed, but this is not really true because all CPUs are still running. There are also similar inconsistencies on other arches (some use power-off for halt, or a forever-loop with IRQs enabled/disabled). IMO the same approach should be used on all architectures; otherwise, what does the message "System halted" really mean? This patch fixes it for x86.

Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 23 Oct 2008, 2 commits
-
-
By H. Peter Anvin

Change header guards named "ASM_X86__*" to "_ASM_X86_*" since:

a. the double underscore is ugly and pointless;
b. having no leading underscore violates namespace constraints.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
By Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 13 Oct 2008, 1 commit
-
-
By Vegard Nossum

Segment registers are reloaded, so we should add a "memory" clobber. The generated assembly code is identical in my tests, but that doesn't mean it is necessarily true for all configurations/compilers. x86_64 already has the memory clobber.

Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 23 Jul 2008, 1 commit
-
-
By Vegard Nossum

This patch is the result of an automatic script that consolidates the format of all the headers in include/asm-x86/. The format:

1. No leading underscore. Names with leading underscores are reserved.
2. Pathname components are separated by two underscores, so we can distinguish between mm_types.h and mm/types.h.
3. Everything except letters and numbers is turned into single underscores.

Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
-
- 12 Jul 2008, 1 commit
-
-
By Ingo Molnar

I spent a fair amount of time chasing a 64-bit bootup crash that manifested itself as bootup segfaults:

  S10network[1825]: segfault at 7f3e2b5d16b8 ip 00000031108748c9 sp 00007fffb9c14c70 error 4 in libc-2.7.so[3110800000+14d000]

eventually causing init to die and panic the system:

  Kernel panic - not syncing: Attempted to kill init!
  Pid: 1, comm: init Not tainted 2.6.26-rc9-tip #13878

After a marathon bisection session, the bad commit turned out to be:

| b7675791859075418199c7af86a116ea34eaf5bd is first bad commit
| commit b7675791859075418199c7af86a116ea34eaf5bd
| Author: Jeremy Fitzhardinge <jeremy@goop.org>
| Date: Wed Jun 25 00:19:00 2008 -0400
|
|     x86: remove open-coded save/load segment operations
|
|     This removes a pile of buggy open-coded implementations of savesegment
|     and loadsegment.

After some more bisection of this patch itself, it turns out that what makes the difference are the savesegment() changes to __switch_to(). Taking a look at this portion of arch/x86/kernel/process_64.o revealed this crucial difference:

| good: 99c: 8c e0     mov %fs,%eax
|       99e: 89 45 cc  mov %eax,-0x34(%rbp)
|
| bad:  99c: 8c 65 cc  mov %fs,-0x34(%rbp)

which is due to:

| unsigned fsindex;
| - asm volatile("movl %%fs,%0" : "=r" (fsindex));
| + savesegment(fs, fsindex);

savesegment() is implemented as:

  #define savesegment(seg, value) \
          asm("mov %%" #seg ",%0":"=rm" (value) : : "memory")

Note the "m" modifier: it allows GCC to generate the segment move into a memory operand as well. But regarding segment operands there's a subtle detail in the x86 instruction set: the above 16-bit moves are zero-extending, but only if the destination is a register. If the destination is a memory operand, -0x34(%rbp) in the above case, there's no zero-extension to 32 bits, and the instruction saves only 16 bits instead of the intended 32. The other 16 bits are random data, which can cause problems when that value is used later on.

The solution is to only allow segment operands to go to registers. This fix allows my test system to boot up without crashing.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 08 Jul 2008, 2 commits
-
-
By Jeremy Fitzhardinge

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: xen-devel <xen-devel@lists.xensource.com>
Cc: Stephen Tweedie <sct@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Jeremy Fitzhardinge

Add "memory" clobbers to savesegment and loadsegment, since they can affect memory accesses and we never want the compiler to reorder them with respect to memory references.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: xen-devel <xen-devel@lists.xensource.com>
Cc: Stephen Tweedie <sct@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 27 May 2008, 1 commit
-
-
By Jeremy Fitzhardinge

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 26 May 2008, 1 commit
-
-
By Ingo Molnar

Fix a bug noticed and fixed by pageexec@freemail.hu.

If built with -fstack-protector-all, we'll have canary checks built into the __switch_to() function. That does not work well with the canary-switching code there: while we already use the %rsp of the new task, we still call __switch_to() with the previous task's canary value in the PDA, hence the __switch_to() ssp prologue instructions will store the previous canary. Then we update the PDA, and upon return from __switch_to() the canary check triggers and we panic.

So update the canary after we have called __switch_to(), where we are at the same stackframe level as the last stackframe of the next (and now freshly current) task.

Note: this means that we still call __switch_to() [and its sub-functions] with the old canary, but that is not a problem, since both the previous and the next task have a high-quality canary. The only (mostly academic) disadvantage is that the canary of one task may leak onto the stack of another task, increasing the risk of information leaks, were an attacker able to read the stack of specific tasks (but not that of others).

To solve this we'll have to reorganize the way we switch tasks, and move the PDA setting into the switch_to() assembly code. That will happen in another patch.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 25 May 2008, 1 commit
-
-
By Alexey Starikovskiy

Signed-off-by: Alexey Starikovskiy <astarikovskiy@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 17 Apr 2008, 4 commits
-
-
By Joe Perches

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Ingo Molnar

Liu Pingfan noticed that switch_to() clobbers more registers than its asm constraints specify. We get away with this mostly due to luck: schedule() by its nature only has 'local' state, which gets reloaded automatically. Fix it nevertheless; we could hit this anytime.

It turns out that with the extra constraints gcc manages to make schedule() even more compact:

   text    data     bss     dec     hex filename
  28626     684    2640   31950    7cce sched.o.before
  28613     684    2640   31937    7cc1 sched.o.after

Reported-by: Liu Pingfan <kernelfans@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Ingo Molnar

Make the code more readable and more hackable:

- use symbolic asm parameters
- use readable indentation
- add comments that explain the details

No code changed:

  kernel/sched.o:
   text    data     bss     dec     hex filename
  28626     684    2640   31950    7cce sched.o.before
  28626     684    2640   31950    7cce sched.o.after

  md5:
  2823d406c18b781975cdb2e7cfea0059 sched.o.before.asm
  2823d406c18b781975cdb2e7cfea0059 sched.o.after.asm

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Pavel Machek

The comment says wmb is a nop, but it is implemented as lock addl below... Should it be compiled to a nop if we know we are running on a "good" Intel CPU? At least remove the confusing comment for now.

Signed-off-by: Pavel Machek <pavel@suse.cz>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 04 Feb 2008, 3 commits
-
-
By Harvey Harrison

A few snuck back into x86.

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
By H. Peter Anvin

The volatile keyword was removed from the clflush() prototype in commit e34907ae; the comment there states:

  x86: remove volatile keyword from clflush.

  the p parameter is an explicit memory reference, and is enough to
  prevent gcc to being nasty here. The volatile seems completely not
  needed.

This reflects an incorrect understanding of the function of the volatile keyword there. The purpose of the volatile keyword here is to inform gcc that it is safe to pass a volatile pointer to this function.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
By H. Peter Anvin

Use the _ASM_EXTABLE macro from <asm/asm.h> instead of open-coding __ex_table entries in include/asm-x86/system.h.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 30 Jan 2008, 8 commits
-
-
By Glauber de Oliveira Costa

Since the cr8 manipulation functions ended up staying in the tree, they can't be defined only when PARAVIRT is off. In this patch, those functions are defined for the PARAVIRT case too.

[ mingo@elte.hu: fixes ]

Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
By Andi Kleen

rdtsc_barrier() is a new barrier primitive that stops RDTSC speculation, to avoid races with timer interrupts on other CPUs. It expands to either LFENCE (on Intel CPUs) or MFENCE (on AMD CPUs), which stops RDTSC speculation on all currently known microarchitectures that implement SSE. On CPUs without SSE there is generally no RDTSC speculation.

[ mingo@elte.hu: renamed it to rdtsc_barrier() and made it x86-only ]

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
By Jan Beulich

The patch introducing this left out x86-64, despite it also having extra entries.

[ mingo@elte.hu: re-merged this to come after the unification patches. ]

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
By Glauber de Oliveira Costa

This patch finishes the unification of the system.h file. i386 needs a constant to be defined, and it is defined inside an ifdef. Other than that, pretty much nothing but includes is left in the arch-specific headers, and they are deleted.

[ mingo@elte.hu: 64-bit needs the cr8 access inlines. ]

Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
By Glauber de Oliveira Costa

This patch moves the switch_to() macro to system.h. As those macros are fundamentally different between i386 and x86_64, they are enclosed in an ifdef.

Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
By Glauber de Oliveira Costa

The memory barrier parts of system.h are not very different between i386 and x86_64, the main difference being the availability of instructions, which we handle with the use of ifdefs. They are consolidated in the system.h file and then removed from the arch-specific headers.

Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
By Glauber de Oliveira Costa

This patch moves the i386 control register manipulation functions, wbinvd, and clts to system.h. They are essentially the same as on x86_64. With this, paravirt support in system.h comes for free on x86_64.

[ mingo@elte.hu: reintroduced the cr8 bits - needed for resume images ]

Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
By Glauber de Oliveira Costa

This patch unifies the load_segment() macro, making it equal on both x86_64 and i386. The common version goes to system.h, and the old ones are deleted.

Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-