- 29 Mar 2012: 1 commit
-
-
By David Howells
Disintegrate asm/system.h for X86.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: H. Peter Anvin <hpa@zytor.com>
cc: x86@kernel.org
-
- 04 Dec 2011: 1 commit
-
-
By Konrad Rzeszutek Wilk
The idea behind commit d91ee586 ("cpuidle: replace xen access to x86 pm_idle and default_idle") was to have one call - disable_cpuidle() - which would make pm_idle not be molested by other code. It disallows cpuidle_idle_call to be set to pm_idle (which is excellent). But in select_idle_routine() and idle_setup(), pm_idle can still be set to either amd_e400_idle, mwait_idle or default_idle. This depends on some CPU flags (MWAIT) and, in the AMD case, on the type of CPU.

In the mwait_idle case we can hit instances where the hypervisor (Amazon EC2 specifically) sets MWAIT and we get:

  Brought up 2 CPUs
  invalid opcode: 0000 [#1] SMP
  Pid: 0, comm: swapper Not tainted 3.1.0-0.rc6.git0.3.fc16.x86_64 #1
  RIP: e030:[<ffffffff81015d1d>] [<ffffffff81015d1d>] mwait_idle+0x6f/0xb4
  ...
  Call Trace:
   [<ffffffff8100e2ed>] cpu_idle+0xae/0xe8
   [<ffffffff8149ee78>] cpu_bringup_and_idle+0xe/0x10
  RIP [<ffffffff81015d1d>] mwait_idle+0x6f/0xb4
  RSP <ffff8801d28ddf10>

In the amd_e400_idle case we don't get such spectacular crashes, but we do end up making an MSR access which is trapped by the hypervisor, and then follow it up with a yield hypercall - meaning we go into the hypervisor twice instead of just once.

The previous behavior before v3.0 was that pm_idle was set to default_idle regardless of select_idle_routine()/idle_setup(). We want to do that, but only for one specific case: Xen. This patch does that.

Fixes RH BZ #739499 and Ubuntu #881076
Reported-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
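A minimal sketch of the idea (hypothetical helper name; the real change is split across the Xen setup code and the x86 idle code): when running as a Xen guest, pin the idle routine to default_idle before select_idle_routine() gets a chance to pick mwait_idle or amd_e400_idle.

```c
#include <linux/pm.h>
#include <asm/system.h>

#ifdef CONFIG_XEN
/* Hypothetical helper, called early during Xen guest setup. */
static void __init xen_force_default_idle(void)
{
        /*
         * default_idle() ends up in safe_halt(), which Xen paravirtualizes
         * into a single yield hypercall - no MWAIT, no trapped MSR access.
         */
        pm_idle = default_idle;
}
#endif
```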
-
- 24 Apr 2011: 1 commit
-
-
By Avi Kivity
Recently, we had a build failure on !CONFIG_PARAVIRT due to a callback ->wbinvd() clashing with a macro wbinvd(). While we worked around the issue, avoid it in the future by changing the macro (and a few surrounding ones) to an inline function.
Signed-off-by: Avi Kivity <avi@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Link: http://lkml.kernel.org/r/1303632711-21662-1-git-send-email-avi@redhat.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
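Roughly what the macro-to-inline conversion looks like (a sketch, not the exact patch): a real function has its own scope, so the identifier no longer macro-expands inside expressions such as a paravirt ops structure's ->wbinvd() callback.

```c
/*
 * After the change: wbinvd is an inline function rather than
 * "#define wbinvd() asm volatile(...)", so a structure member named
 * ->wbinvd() no longer gets macro-expanded out from under us.
 */
static inline void wbinvd(void)
{
        asm volatile("wbinvd" ::: "memory");
}
```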
-
- 20 Feb 2011: 1 commit
-
-
By Borislav Petkov
Those are unused since at least the beginning of git history.
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
LKML-Reference: <1298044056-31104-1-git-send-email-bp@amd64.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 02 Jul 2010: 1 commit
-
-
By Alexander Duyck
This patch removes the CONFIG_MCORE2 check from around NET_IP_ALIGN. It is based on a suggestion from Andi Kleen. The assumption is that there are not any x86 cores where unaligned access is really slow, and this change allows the performance improvement to also apply to configurations that are not necessarily optimized for Core 2.
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
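Roughly the before/after in the x86 header (a sketch; the override itself is explained in the next entry below): the zero override no longer depends on the Core 2 configuration.

```c
/* before: only CONFIG_MCORE2 builds avoided the 2-byte IP-header offset */
#ifdef CONFIG_MCORE2
#define NET_IP_ALIGN    0
#endif

/* after: all x86 builds keep DMA buffers naturally aligned, since no
 * x86 core pays a large penalty for the unaligned IP-header accesses */
#define NET_IP_ALIGN    0
```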
-
- 01 Jul 2010: 1 commit
-
-
By Alexander Duyck
x86 architectures can handle unaligned accesses in hardware, and it has been shown that unaligned DMA accesses can be expensive on Nehalem architectures. As such we should override NET_IP_ALIGN to resolve this issue.
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 19 Jun 2010: 1 commit
-
-
By Andi Kleen
This fixes the "-Os breaks with gcc 4.5" bug. rdtsc_barrier needs to be force-inlined, otherwise user space will jump into kernel space and kill init. This also addresses http://gcc.gnu.org/bugzilla/show_bug.cgi?id=44129, I believe.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
LKML-Reference: <20100618210859.GA10913@basil.fritz.box>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: <stable@kernel.org>
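For reference, roughly the definition involved (as it looked in asm/system.h around that time; treat the details as an approximation). The functional point of the fix is the __always_inline, which keeps gcc -Os from emitting the barrier as an out-of-line kernel function that vsyscall/vDSO user-space code would then try to call.

```c
/*
 * Stop RDTSC speculation. Needed when RDTSC (or get_cycles, or a vread
 * that touches the TSC) must execute inside a defined code region.
 * Must be __always_inline: a real call from the vsyscall page into
 * kernel text is what killed init in the gcc 4.5 -Os regression.
 */
static __always_inline void rdtsc_barrier(void)
{
        alternative(ASM_NOP3, "mfence", X86_FEATURE_MFENCE_RDTSC);
        alternative(ASM_NOP3, "lfence", X86_FEATURE_LFENCE_RDTSC);
}
```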
-
- 10 Feb 2010: 1 commit
-
-
By Serge E. Hallyn
Both x86-32 and x86-64 with 32-bit compat use ARCH_DLINFO_IA32, which defines two saved_auxv entries. But system.h only defines AT_VECTOR_SIZE_ARCH as 2 for CONFIG_IA32_EMULATION, not for CONFIG_X86_32. Fix that.
Signed-off-by: Serge E. Hallyn <serue@us.ibm.com>
LKML-Reference: <20100209023502.GA15408@us.ibm.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
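A sketch of the fixed #ifdef (exact constants hedged; the point is that CONFIG_X86_32 joins CONFIG_IA32_EMULATION in reserving room for the two ARCH_DLINFO_IA32 entries):

```c
/* in asm/system.h: saved_auxv needs space for the arch auxv entries */
#if defined(CONFIG_IA32_EMULATION) || defined(CONFIG_X86_32)
# define AT_VECTOR_SIZE_ARCH 2  /* e.g. AT_SYSINFO + AT_SYSINFO_EHDR */
#else
# define AT_VECTOR_SIZE_ARCH 1  /* AT_SYSINFO_EHDR only */
#endif
```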
-
- 09 Dec 2009: 1 commit
-
-
By Andy Isaacson
Unify the x86_32 and x86_64 implementations of the __show_regs() header, standardizing on the x86_64 format string in the process. Also, 32-bit will now call print_modules.
Signed-off-by: Andy Isaacson <adi@hexapodia.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Robert Hancock <hancockrwd@gmail.com>
Cc: Richard Zidlicky <rz@linux-m68k.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
LKML-Reference: <20091208082942.GA27174@hexapodia.org>
[ v2: resolved conflict ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 26 Nov 2009: 2 commits
-
-
By Ingo Molnar
Make it readable in the source too, not just in the assembly output. No change in functionality.
Cc: Brian Gerst <brgerst@gmail.com>
LKML-Reference: <1259176706-5908-1-git-send-email-brgerst@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Brian Gerst
Zero the input register in the exception handler instead of using an extra register to pass in a zero value.
Signed-off-by: Brian Gerst <brgerst@gmail.com>
LKML-Reference: <1259176706-5908-1-git-send-email-brgerst@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 08 Nov 2009: 1 commit
-
-
By Masami Hiramatsu
Remove the unused thread_return label from the switch_to() macro on x86-64. Since this symbol cuts into schedule(), backtraces taken in the latter half of schedule() were always shown as thread_return().
Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: systemtap <systemtap@sources.redhat.com>
Cc: DLE <dle-develop@lists.sourceforge.net>
LKML-Reference: <20091105160359.5181.26225.stgit@harusame>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 29 Oct 2009: 1 commit
-
-
By Rusty Russell
Now that the return value of alloc_percpu is compatible with the address of per-cpu vars, it makes sense to hand around the address of per-cpu variables. To make this sane, we remove the per_cpu__ prefix we created to stop people accidentally using these vars directly. Now that we have sparse, we can use that instead (next patch).

tj:
* Updated to convert stuff which was missed by, or added after, the original patch.
* Kill the per_cpu_var() macro.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
-
- 04 Sep 2009: 1 commit
-
-
By Jeremy Fitzhardinge
The Intel Optimization Reference Guide says:

  In Intel Atom microarchitecture, the address generation unit assumes that the segment base will be 0 by default. A non-zero segment base will cause load and store operations to experience a delay.
  - If the segment base isn't aligned to a cache line boundary, the max throughput of memory operations is reduced to one every 9 cycles.
  [...]
  Assembly/Compiler Coding Rule 15. (H impact, ML generality) For Intel Atom processors, use segments with base set to 0 whenever possible; avoid non-zero segment base address that is not aligned to cache line boundary at all cost.

We can't avoid having a non-zero base for the stack-protector segment, but we can make it cache-aligned.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: <stable@kernel.org>
LKML-Reference: <4AA01893.6000507@goop.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
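A sketch of the resulting per-cpu object (close to what the patch does, but hedged): the stack-protector segment's %gs base points at this structure, so giving it cache-line alignment satisfies the Atom rule quoted above.

```c
/* gcc's stack-protector code reads the canary at %gs:20, hence the pad */
struct stack_canary {
        char __pad[20];
        unsigned long canary;
};
DECLARE_PER_CPU_ALIGNED(struct stack_canary, stack_canary);
```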
-
- 20 Jul 2009: 1 commit
-
-
By Akinobu Mita
Rename set_base()/set_limit() to set_desc_base()/set_desc_limit() and rewrite them in C. These are naturally introduced by the idea of get_desc_base()/get_desc_limit(). The conversion actually found a bug in apm_32.c: bad_bios_desc is written at run time, but it is defined as a const variable.
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
LKML-Reference: <20090718151105.GC11294@localhost.localdomain>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
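Roughly the C replacements, mirroring the existing get_desc_base()/get_desc_limit() and using the bit-field layout of struct desc_struct (sketch):

```c
static inline void set_desc_base(struct desc_struct *desc, unsigned long base)
{
        desc->base0 = base & 0xffff;            /* bits 0..15  */
        desc->base1 = (base >> 16) & 0xff;      /* bits 16..23 */
        desc->base2 = (base >> 24) & 0xff;      /* bits 24..31 */
}

static inline void set_desc_limit(struct desc_struct *desc, unsigned long limit)
{
        desc->limit0 = limit & 0xffff;          /* bits 0..15  */
        desc->limit  = (limit >> 16) & 0xf;     /* bits 16..19 */
}
```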
-
- 02 Mar 2009: 2 commits
-
-
By Jeremy Fitzhardinge
It's the correct thing to do before using the struct in a prototype.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
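For example (the prototype is shown only to illustrate the point): a forward declaration of the struct tag is enough for a pointer parameter, no full definition required.

```c
struct task_struct;     /* forward declaration */

struct task_struct *__switch_to(struct task_struct *prev,
                                struct task_struct *next);
```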
-
By Jeremy Fitzhardinge
With x86-32 and -64 using the same mechanism for managing the TSS I/O permissions bitmap, large chunks of process*.c are trivially unifiable, including:
- exit_thread
- flush_thread
- __switch_to_xtra (along with TSC enable/disable)

and as bonus pickups:
- sys_fork
- sys_vfork

(Note: asmlinkage expands to empty on x86-64.)
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 11 Feb 2009: 1 commit
-
-
By Tejun Heo
Impact: fix x86_32 stack protector

Brian Gerst found out that %gs was being initialized to stack_canary instead of stack_canary - 20, which basically gave the same canary value to all threads. Fixing this also exposed the following bugs:
* cpu_idle() didn't call boot_init_stack_canary()
* stack canary switching in switch_to() was being done too late, making the initial run of a new thread use the old stack canary value.

Fix all of them and, while at it, update the comment in cpu_idle() about calling boot_init_stack_canary().
Reported-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 10 Feb 2009: 3 commits
-
-
By Tejun Heo
Impact: stack protector for x86_32

Implement stack protector for x86_32. GDT entry 28 is used for it. It's set to point to stack_canary-20 and has a length of 24 bytes. CONFIG_CC_STACKPROTECTOR turns off CONFIG_X86_32_LAZY_GS and sets %gs to the stack canary segment on entry. As %gs is otherwise unused by the kernel, the canary can be anywhere. It's defined as a percpu variable.

x86_32 exception handlers take the register frame on the stack directly as struct pt_regs. With -fstack-protector turned on, gcc copies the whole structure after the stack canary and (of course) doesn't copy it back on return, thus losing all changes. For now, -fno-stack-protector is added to all files which contain those functions. We definitely need something better.
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
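A sketch of the per-cpu GDT setup this implies (names follow the kernel's descriptor helpers; treat the details as illustrative rather than the literal patch): the canary segment's base is set to stack_canary - 20 so that gcc's %gs:20 reads land on the canary field.

```c
static inline void setup_stack_canary_segment(int cpu)
{
        unsigned long canary = (unsigned long)&per_cpu(stack_canary, cpu) - 20;
        struct desc_struct *gdt_table = get_cpu_gdt_table(cpu);
        struct desc_struct desc;

        desc = gdt_table[GDT_ENTRY_STACK_CANARY];
        set_desc_base(&desc, canary);   /* gcc reads the canary at %gs:20 */
        write_gdt_entry(gdt_table, GDT_ENTRY_STACK_CANARY, &desc, DESCTYPE_S);
}
```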
-
By Tejun Heo
Impact: pt_regs changed, lazy gs handling made optional, slight overhead added to SAVE_ALL, error_code path simplified a bit

On x86_32, %gs hasn't been used by the kernel and is handled lazily. pt_regs doesn't have a place for it, and gs is saved/loaded only when necessary. In preparation for stack protector support, this patch makes lazy %gs handling optional by doing the following:
* Add CONFIG_X86_32_LAZY_GS and a place for gs in pt_regs.
* Save and restore %gs along with the other registers in entry_32.S unless LAZY_GS. Note that this unfortunately adds "pushl $0" to SAVE_ALL even when LAZY_GS; however, it adds no overhead to the common exit path and simplifies the entry path with error code.
* Define different user_gs accessors depending on LAZY_GS and add lazy_save_gs() and lazy_load_gs(), which are no-ops if !LAZY_GS. The lazy_*_gs() ops are used to save, load and clear %gs lazily.
* Define ELF_CORE_COPY_KERNEL_REGS(), which always reads %gs directly.

The xen and lguest changes need to be verified.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
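Roughly what the optional-lazy split looks like in the header (a sketch of the lazy_*_gs() ops described above):

```c
#ifdef CONFIG_X86_32_LAZY_GS
/* %gs still holds the user value; save/load it only when needed */
#define lazy_save_gs(v)         savesegment(gs, (v))
#define lazy_load_gs(v)         loadsegment(gs, (v))
#else
/* %gs is saved/restored with the other registers in entry_32.S */
#define lazy_save_gs(v)         do { } while (0)
#define lazy_load_gs(v)         do { } while (0)
#endif
```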
-
By Tejun Heo
Impact: cleanup

On x86_32, %gs is handled lazily. It's not saved and restored on kernel entry/exit but only when necessary, which usually is during task switch, though there are a few other places. Currently this is done by calling savesegment() and loadsegment() explicitly. Define get_user_gs(), set_user_gs() and task_user_gs() and use them instead. While at it, clean up the register access macros in signal.c. This cleans up the code a bit and will help future changes.
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
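For instance, the lazy-%gs flavour of the new accessors just wraps the old explicit savesegment()/loadsegment() calls, while non-lazy builds read %gs from pt_regs instead (a sketch, not the exact header):

```c
#ifdef CONFIG_X86_32_LAZY_GS
#define get_user_gs(regs)       (u16)({ unsigned long v; savesegment(gs, v); v; })
#define set_user_gs(regs, v)    loadsegment(gs, (unsigned long)(v))
#define task_user_gs(tsk)       ((tsk)->thread.gs)
#else
#define get_user_gs(regs)       (u16)((regs)->gs)
#define set_user_gs(regs, v)    do { (regs)->gs = (v); } while (0)
#define task_user_gs(tsk)       (task_pt_regs(tsk)->gs)
#endif
```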
-
- 21 Jan 2009: 1 commit
-
-
By Tejun Heo
Impact: cleanup

In switch_to(), instead of taking an offset into irq_stack_union.stack, make it a proper percpu access using __percpu_arg() and per_cpu_var().
Signed-off-by: Tejun Heo <tj@kernel.org>
-
- 20 Jan 2009: 2 commits
-
-
By Brian Gerst
Impact: x86_64 percpu area layout change, irq_stack now at the beginning

Now that the PDA is empty except for the stack canary, it can be removed. The irqstack is moved to the start of the per-cpu section. If the stack protector is enabled, the canary overlaps the bottom 48 bytes of the irqstack.

tj:
* updated subject
* dropped asm relocation of irq_stack_ptr
* updated comments a bit
* rebased on top of stack canary changes

Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
By Tejun Heo
Impact: no unnecessary stack canary swapping during context switch

There's no point in moving stack_canary around during context switch if it's not enabled. Conditionalize it.
Signed-off-by: Tejun Heo <tj@kernel.org>
-
- 18 Jan 2009: 2 commits
-
-
By Brian Gerst
Accessing memory through %gs should not use rip-relative addressing. Adding a P prefix for the argument tells gcc not to add (%rip) to the memory references.
Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
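A minimal illustration with a hypothetical helper (not the kernel's percpu macro): with a plain memory operand gcc may emit the rip-relative form, which is wrong once the access goes through the %gs per-cpu segment; the %P modifier prints the bare symbol so the displacement is absolute.

```c
/* hypothetical helper, for illustration only */
#define gs_read_long(var) ({                                            \
        unsigned long __val;                                            \
        asm("movq %%gs:%P1, %0" : "=r" (__val) : "m" (var));            \
        __val;                                                          \
})
/* emits:  movq %gs:var, %rax        (absolute displacement)
 * not:    movq %gs:var(%rip), %rax  (rip-relative, wrong base) */
```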
-
By Brian Gerst
Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
- 11 Jan 2009: 1 commit
-
-
By Benjamin LaHaise
Impact: micro-optimization

The patch below removes an unnecessary locked instruction from switch_to(). TIF_FORK is only ever set in copy_thread() on initial process creation, and gets cleared during the first scheduling of the process. As such, it is safe to use an unlocked test for the flag within switch_to().
Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 17 Dec 2008: 1 commit
-
-
By Jaswinder Singh
Impact: cleanup

In asm/system.h, moved __switch_to out of CONFIG_X86_32 as it is common to both 32 and 64 bit. In asm/prctl.h, defined sys_arch_prctl.
Signed-off-by: Jaswinder Singh <jaswinder@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 11 Nov 2008: 1 commit
-
-
By Ivan Vecera
Impact: really halt all CPUs on halt

The function machine_halt (resp. native_machine_halt) is empty for the x86 architecture. When the command 'halt -f' is invoked, the message "System halted." is displayed, but this is not really true because all CPUs are still running. There are also similar inconsistencies on other arches (some use power-off for halt, or a forever-loop with IRQs enabled/disabled). IMO the same approach should be used for all architectures - OR what does the message "System halted" really mean? This patch fixes it for x86.
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
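A sketch close to what the fix does in the x86 reboot code: actually stop the other CPUs and then halt the local one, instead of returning from an empty function while everything keeps running.

```c
static void native_machine_halt(void)
{
        /* stop other cpus and apics */
        machine_shutdown();

        /* stop this cpu */
        stop_this_cpu(NULL);
}
```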
-
- 23 Oct 2008: 2 commits
-
-
By H. Peter Anvin
Change header guards named "ASM_X86__*" to "_ASM_X86_*" since:
a. the double underscore is ugly and pointless.
b. no leading underscore violates namespace constraints.
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
By Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 13 Oct 2008: 1 commit
-
-
By Vegard Nossum
Segment registers are reloaded, so we should add a memory clobber. The generated assembly code is identical in my tests, but this doesn't mean it is necessarily true for all configurations/compilers. x86_64 already has the memory clobber.
Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 23 Jul 2008: 1 commit
-
-
By Vegard Nossum
This patch is the result of an automatic script that consolidates the format of all the headers in include/asm-x86/.

The format:
1. No leading underscore. Names with leading underscores are reserved.
2. Pathname components are separated by two underscores. So we can distinguish between mm_types.h and mm/types.h.
3. Everything except letters and numbers is turned into single underscores.

Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
-
- 12 Jul 2008: 1 commit
-
-
By Ingo Molnar
I spent a fair amount of time chasing a 64-bit bootup crash that manifested itself as bootup segfaults:

  S10network[1825]: segfault at 7f3e2b5d16b8 ip 00000031108748c9 sp 00007fffb9c14c70 error 4 in libc-2.7.so[3110800000+14d000]

eventually causing init to die and panic the system:

  Kernel panic - not syncing: Attempted to kill init!
  Pid: 1, comm: init Not tainted 2.6.26-rc9-tip #13878

After a marathon bisection session, the bad commit turned out to be:

| b7675791859075418199c7af86a116ea34eaf5bd is first bad commit
| commit b7675791859075418199c7af86a116ea34eaf5bd
| Author: Jeremy Fitzhardinge <jeremy@goop.org>
| Date: Wed Jun 25 00:19:00 2008 -0400
|
|     x86: remove open-coded save/load segment operations
|
|     This removes a pile of buggy open-coded implementations of savesegment
|     and loadsegment.

After some more bisection of this patch itself, it turns out that what makes the difference are the savesegment() changes to __switch_to(). Taking a look at this portion of arch/x86/kernel/process_64.o revealed this crucial difference:

| good: 99c: 8c e0     mov %fs,%eax
|       99e: 89 45 cc  mov %eax,-0x34(%rbp)
|
| bad:  99c: 8c 65 cc  mov %fs,-0x34(%rbp)

which is due to:

| unsigned fsindex;
| - asm volatile("movl %%fs,%0" : "=r" (fsindex));
| + savesegment(fs, fsindex);

savesegment() is implemented as:

  #define savesegment(seg, value) \
          asm("mov %%" #seg ",%0":"=rm" (value) : : "memory")

Note the "m" modifier - it allows GCC to generate the segment move into a memory operand as well. But regarding segment operands there's a subtle detail in the x86 instruction set: the above 16-bit moves are zero-extending, but only if the destination is a register. If it goes to a memory operand, -0x34(%rbp) in the above case, there's no zero-extension to 32 bits and the instruction will only save 16 bits instead of the intended 32 bits. The other 16 bits are random data - which can cause problems when that value is used later on.

The solution is to only allow segment operands to go to registers. This fix allows my test system to boot up without crashing.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
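In other words (constraint names kept, macro renamed here only so the before/after variants can sit side by side): the fix is to drop the memory alternative from the output constraint.

```c
/* buggy: gcc may pick the memory alternative and store only 16 bits,
 * leaving the upper half of the 32-bit slot as stack garbage */
#define savesegment_buggy(seg, value) \
        asm("mov %%" #seg ",%0" : "=rm" (value) : : "memory")

/* fixed: register-only destination, so the 16-bit segment move
 * zero-extends into the full 32-bit register */
#define savesegment_fixed(seg, value) \
        asm("mov %%" #seg ",%0" : "=r" (value) : : "memory")
```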
-
- 08 Jul 2008: 2 commits
-
-
By Jeremy Fitzhardinge
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: xen-devel <xen-devel@lists.xensource.com>
Cc: Stephen Tweedie <sct@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Jeremy Fitzhardinge
Add "memory" clobbers to savesegment and loadsegment, since they can affect memory accesses and we never want the compiler to reorder them with respect to memory references.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: xen-devel <xen-devel@lists.xensource.com>
Cc: Stephen Tweedie <sct@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 27 May 2008: 1 commit
-
-
By Jeremy Fitzhardinge
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 26 May 2008: 1 commit
-
-
By Ingo Molnar
Fix a bug noticed and fixed by pageexec@freemail.hu.

If built with -fstack-protector-all then we'll have canary checks built into the __switch_to() function. That does not work well with the canary-switching code there: while we already use the %rsp of the new task, we still call __switch_to() with the previous task's canary value in the PDA, hence the __switch_to() ssp prologue instructions will store the previous canary. Then we update the PDA and, upon return from __switch_to(), the canary check triggers and we panic.

So update the canary after we have called __switch_to(), where we are at the same stack-frame level as the last stack frame of the next (and now freshly current) task.

Note: this means that we still call __switch_to() [and its sub-functions] with the old canary, but that is not a problem, since both the previous and the next task have a high-quality canary. The only (mostly academic) disadvantage is that the canary of one task may leak onto the stack of another task, increasing the risk of information leaks, were an attacker able to read the stack of specific tasks (but not that of others).

To solve this we'll have to reorganize the way we switch tasks, and move the PDA setting into the switch_to() assembly code. That will happen in another patch.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 25 May 2008: 1 commit
-
-
By Alexey Starikovskiy
Signed-off-by: Alexey Starikovskiy <astarikovskiy@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 17 Apr 2008: 1 commit
-
-
By Joe Perches
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-