- 11 Jul 2009, 1 commit

Committed by Alan Cox
No code changes except printk levels (although some of the K6 MTRR code might be clearer if there were a few, as would splitting out some of the Intel cache code).

Signed-off-by: Alan Cox <alan@linux.intel.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 15 Jun 2009, 1 commit

Committed by Vegard Nossum
The hooks that we modify are:

- Page fault handler (to handle kmemcheck faults)
- Debug exception handler (to hide pages after single-stepping the instruction that caused the page fault)

Also redefine memset() to use the optimized version if kmemcheck is enabled. (Thanks to Pekka Enberg for minimizing the impact on the page fault handler.)

As kmemcheck doesn't handle MMX/SSE instructions (yet), we also disable the optimized xor code, and rely instead on the generic C implementation in order to avoid false-positive warnings.

Signed-off-by: Vegard Nossum <vegardno@ifi.uio.no>
[whitespace fixlet]
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
[rebased for mainline inclusion]
Signed-off-by: Vegard Nossum <vegardno@ifi.uio.no>
-
- 18 May 2009, 1 commit

Committed by Yinghai Lu
Should not call that if the APIC is disabled.

[ Impact: fix crash on certain UP configs ]

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
LKML-Reference: <4A09CCBB.2000306@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 13 Mar 2009, 1 commit

Committed by Jan Beulich
Impact: 32/64-bit consolidation

In a first step, this allows fixing phys_addr_valid() for PAE (which until now reported all addresses to be valid). Subsequently, this will also allow simplifying some MTRR handling code.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
LKML-Reference: <49B9101E.76E4.0078.0@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
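For illustration, the test this makes possible can be sketched from user space; this is not the kernel's phys_addr_valid(), just a hedged stand-alone example that reads the physical address width from CPUID leaf 0x80000008 (EAX[7:0]) and applies the same "no bits above the supported width" check:

/*
 * User-space sketch (assumes an x86 CPU and GCC's <cpuid.h>).
 * Build: gcc -O2 physbits.c -o physbits
 */
#include <cpuid.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    unsigned int phys_bits = 32;        /* conservative default */

    if (__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx))
        phys_bits = eax & 0xff;         /* physical address bits */

    uint64_t addr = 0x100000000ULL;     /* 4 GiB, an arbitrary test value */
    int valid = (addr >> phys_bits) == 0;

    printf("physical address bits: %u, %#llx valid: %s\n",
           phys_bits, (unsigned long long)addr, valid ? "yes" : "no");
    return 0;
}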
-
- 12 Mar 2009, 1 commit

Committed by Jan Beulich
Impact: debuggability and micro-optimization

Putting whatever is possible into the (final) .rodata section increases the likelihood of catching memory corruption bugs early, and reduces false cache line sharing.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
LKML-Reference: <49B90961.76E4.0078.0@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 08 Mar 2009, 1 commit

Committed by Yinghai Lu
Impact: cleanup and code size reduction on 64-bit

This code only applies to Intel Pentium and AMD K7 32-bit CPUs. Move those checks to intel_init()/amd_init() for 32-bit, so 64-bit will not build this code. Also change to use a cpu_index check to decide whether we need to emit the warning.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
LKML-Reference: <49B377D2.8030108@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 27 Feb 2009, 1 commit

Committed by Ingo Molnar
If the TSC is constant and non-stop, also set it reliable. (We will turn this off in DMI quirks for multi-chassis systems.)

The performance numbers on a 16-way Nehalem system running 32 tasks that context-switch between each other are significant:

    sched_clock_stable=0     sched_clock_stable=1
    ....................     ....................
    22.456925 million/sec    24.306972 million/sec   [+8.2%]

lmbench's "lat_ctx -s 0 2" goes from 0.63 microseconds to 0.59 microseconds - a 6.7% increase in context-switching performance.

Perfstat of 1 million pipe context switches between two tasks:

 Performance counter stats for './pipe-test-1m':

        [before]            [after]
    ............       ............
    37621.421089       36436.848378   task clock ticks     (msecs)
               0                  0   CPU migrations       (events)
         2000274            2000189   context switches     (events)
             194                193   pagefaults           (events)
      8433799643         8171016416   CPU cycles           (events)   -3.21%
      8370133368         8180999694   instructions         (events)   -2.31%
         4158565            3895941   cache references     (events)   -6.74%
           44312              46264   cache misses         (events)
     2349.287976        2279.362465   wall-time            (msecs)    -3.06%

The speedup comes straight from the reduction in the instruction count. sched_clock_cpu() got simpler and the whole workload thus executes faster.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 20 Feb 2009, 1 commit

Committed by Vegard Nossum
Impact: Cleanup. No functional changes.

Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 18 Feb 2009, 2 commits

Committed by Ingo Molnar
Impact: cleanup

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Ingo Molnar
Impact: cleanup

Remove genapic.h and remove all references to it.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 09 Feb 2009, 1 commit

Committed by Pallipadi, Venkatesh
For Intel 7400 series CPUs, the recommendation is to use a clflush on the monitored address just before the monitor and mwait pair [1]. This clflush makes sure that there are no false wakeups from mwait when the monitored address was recently written to.

[1] "MONITOR/MWAIT Recommendations for Intel Xeon Processor 7400 series" section in the specification update document for the 7400 series:
    http://download.intel.com/design/xeon/specupdt/32033601.pdf

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
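For illustration only, a kernel-context sketch of that sequence is shown below. The function name is made up and monitor/mwait execute only at ring 0, so this is not runnable user-space code; the opcode bytes mirror the encodings the kernel's __monitor()/__mwait() helpers use.

/* Illustrative ring-0 sketch of the clflush + monitor + mwait sequence. */
static inline void monitor_mwait_with_clflush(void *monitored,
                                              unsigned long hints,
                                              unsigned long ext)
{
    /* Flush the monitored line so a recent store cannot cause a false wakeup. */
    asm volatile("clflush %0" : "+m" (*(volatile char *)monitored));

    /* monitor: eax = linear address, ecx = extensions, edx = hints */
    asm volatile(".byte 0x0f, 0x01, 0xc8"
                 :: "a" (monitored), "c" (0UL), "d" (0UL));

    /* mwait: eax = hints (target C-state), ecx = extensions */
    asm volatile(".byte 0x0f, 0x01, 0xc9"
                 :: "a" (hints), "c" (ext));
}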
-
- 29 Jan 2009, 1 commit

Committed by Ingo Molnar
Spread mach_apic.h definitions into genapic.h (with some knock-on effects on smp.h and apic.h).

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 27 Jan 2009, 1 commit

Committed by H. Peter Anvin
Impact: re-enable CPUID unmasking on affected processors

As far as I am capable of discerning from the documentation, MSR_IA32_MISC_ENABLE should be available for all family 0xf CPUs, as well as family 6 for model >= 0xd (newer Pentium M). The documentation on this isn't ideal, so we still need to be on the lookout for errors.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 26 Jan 2009, 1 commit

Committed by Ingo Molnar
Impact: fix boot hang on pre-model-15 Intel CPUs

rdmsrl_safe() does not work in very early bootup code yet, because we don't have the pagefault handler installed yet, so the exception section does not get parsed. rdmsr_safe() will just crash and hang the bootup.

So limit the MSR_IA32_MISC_ENABLE MSR read to those CPU types that support it.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 24 Jan 2009, 1 commit

Committed by H. Peter Anvin
Impact: Cleanup

When PAT was originally introduced, it was handled specially for a few reasons:

- PAT bugs are hard to track down, so we wanted to maintain a whitelist of CPUs.
- The i386 and x86-64 CPUID code was not yet unified.

Both of these are now obsolete, so handle PAT like any other feature, including ordinary feature blacklisting due to known bugs.

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- 22 Jan 2009, 1 commit

Committed by H. Peter Anvin
Impact: Fixes crashes with misconfigured BIOSes on XSAVE hardware

Avuton Olrich reported early boot crashes with v2.6.28 and bisected it down to dc1e35c6 ("x86, xsave: enable xsave/xrstor on cpus with xsave support").

If the CPUID limit bit in MSR_IA32_MISC_ENABLE is set, clear it to make all CPUID information available. This is required for some features to work, in particular XSAVE.

Reported-and-bisected-by: Avuton Olrich <avuton@gmail.com>
Tested-by: Avuton Olrich <avuton@gmail.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
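As a hedged user-space illustration (assuming root and a loaded msr driver so /dev/cpu/0/msr exists; the macro names below are made up), the bit in question can be inspected like this. Clearing it, as the patch does, is a wrmsr performed from the kernel.

/* Read MSR_IA32_MISC_ENABLE (0x1a0) on CPU 0 and report the CPUID limit bit. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define MSR_IA32_MISC_ENABLE  0x1a0
#define LIMIT_CPUID_BIT       22    /* "limit CPUID maxval" */

int main(void)
{
    uint64_t val;
    int fd = open("/dev/cpu/0/msr", O_RDONLY);

    if (fd < 0 || pread(fd, &val, sizeof(val), MSR_IA32_MISC_ENABLE) != sizeof(val)) {
        perror("read /dev/cpu/0/msr");
        return 1;
    }
    printf("MISC_ENABLE = %#llx, CPUID limit bit: %s\n",
           (unsigned long long)val,
           (val >> LIMIT_CPUID_BIT) & 1 ? "set" : "clear");
    close(fd);
    return 0;
}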
-
- 19 Dec 2008, 1 commit

Committed by Suresh Siddha
Impact: fix wrong cache sharing detection on platforms supporting > 8-bit APIC IDs

In the presence of the extended topology enumeration leaf 0xb provided by cpuid, the 32-bit extended initial_apicid in the cpuinfo_x86 struct will be updated by detect_extended_topology(). At this point, we should also reinit the apicid (which could also potentially be extended to 32 bits). Without this, there will potentially be duplicate apicids populated in the per-cpu cpuinfo_x86 structs, resulting in wrong cache sharing topology etc. being detected by init_intel_cacheinfo().

Reported-by: Dimitri Sivanich <sivanich@sgi.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Acked-by: Dimitri Sivanich <sivanich@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: <stable@kernel.org>
-
- 17 Dec 2008, 1 commit

Committed by Venki Pallipadi
Impact: reward non-stop TSCs with good TSC-based clocksources, etc.

Add support for CPUID_0x80000007_Bit8 on Intel CPUs as well. This bit means that the TSC is invariant with C/P/T states and always runs at constant frequency.

With Intel CPUs, we have 3 classes:

* CPUs where TSC runs at constant rate and does not stop in C-states
* CPUs where TSC runs at constant rate, but will stop in deep C-states
* CPUs where TSC rate will vary based on P/T-states and TSC will stop in deep C-states

To cover these 3, one feature bit (CONSTANT_TSC) is not enough. So, add a second bit (NONSTOP_TSC). CONSTANT_TSC indicates that the TSC runs at constant frequency irrespective of P/T-states, and NONSTOP_TSC indicates that TSC does not stop in deep C-states. CPUID_0x80000007_Bit8 indicates that both these feature bits can be set.

We still have CONSTANT_TSC _set_ and NONSTOP_TSC _not_ set on some older Intel CPUs, based on model checks. We can use TSC on such CPUs for time, as long as those CPUs do not support/enter deep C-states.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
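A minimal user-space sketch of the hardware check behind the new bit (assuming GCC's <cpuid.h>; the kernel itself derives its feature flags from the same leaf):

/* Check CPUID leaf 0x80000007, EDX bit 8: the invariant-TSC bit. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(0x80000007, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 0x80000007 not available");
        return 1;
    }
    /* Bit set: TSC runs at a constant rate and keeps counting in deep C-states. */
    printf("invariant TSC: %s\n", (edx >> 8) & 1 ? "yes" : "no");
    return 0;
}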
-
- 12 Dec 2008, 1 commit

Committed by Markus Metzger
Impact: cleanup

Move the BTS bits from ptrace.c into ds.c.

Signed-off-by: Markus Metzger <markus.t.metzger@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 10 Nov 2008, 1 commit

Committed by Markus Metzger
Impact: widen BTS/PEBS ptrace enablement to more CPU models

Move BTS initialisation out of an #ifdef CONFIG_X86_64 guard. Assume the core2 BTS and DS layout for future models of family 6 processors.

Signed-off-by: Markus Metzger <markus.t.metzger@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 16 Oct 2008, 1 commit

Committed by Yinghai Lu
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 10 Sep 2008, 3 commits

Committed by Sheng Yang
The hardware virtualization technology evolves very fast. But currently it's hard to tell if your CPU supports a certain kind of HW technology without digging into the source code. The patch adds a new category in "flags" under /proc/cpuinfo. Now "flags" can indicate the (important) HW virtualization features the CPU supports as well. The current implementation just covers the Intel VMX side.

Signed-off-by: Sheng Yang <sheng.yang@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
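The underlying capability check is a single CPUID bit; a hedged user-space sketch follows (the /proc/cpuinfo flag additionally goes through the kernel's feature handling, and actual VMX usability also depends on the IA32_FEATURE_CONTROL MSR, not shown here):

/* Check CPUID leaf 1, ECX bit 5: Intel VMX support. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1;
    printf("VMX supported: %s\n", (ecx >> 5) & 1 ? "yes" : "no");
    return 0;
}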
-
Committed by Yinghai Lu
Consolidate the code some more. No change in functionality intended.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Yinghai Lu
Prepare for unification.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 08 Sep 2008, 1 commit

Committed by Yinghai Lu
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 05 Sep 2008, 1 commit

Committed by Yinghai Lu
1. Add c_x86_vendor to cpu_dev.
2. Change cpu_devs to static.
3. Check c_x86_vendor before putting that cpu_dev into the array.
4. Remove the alignment for 64-bit.
5. Order the sequence in cpu_devs according to link sequence, so we can put Intel first, then AMD...

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 23 Aug 2008, 1 commit

Committed by Suresh Siddha
CPUID leaf 0xb provides extended topology enumeration. This interface provides the 32-bit x2APIC id of the logical processor and it also provides a new mechanism to detect SMT and core siblings (which provides increased addressability).

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
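A user-space sketch of what the enumeration looks like (assuming GCC's <cpuid.h>; this mirrors the interface described above, not the kernel's detect_extended_topology()):

/* Walk CPUID leaf 0xb: level type 1 = SMT, 2 = core; EDX = x2APIC id. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (__get_cpuid_max(0, 0) < 0xb) {
        puts("CPUID leaf 0xb not supported");
        return 1;
    }
    for (unsigned int level = 0; ; level++) {
        __cpuid_count(0xb, level, eax, ebx, ecx, edx);
        unsigned int type = (ecx >> 8) & 0xff;
        if (!type)      /* level type 0 terminates the enumeration */
            break;
        printf("level %u: type %u, shift %u, logical CPUs %u, x2APIC id %u\n",
               level, type, eax & 0x1f, ebx & 0xffff, edx);
    }
    return 0;
}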
-
- 18 Aug 2008, 2 commits

Committed by Thomas Petazzoni
arch/x86/kernel/cpu/intel.c defines a few fallback functions (cmpxchg_*()) that are used when the CPU doesn't support cmpxchg and/or cmpxchg64 natively. However, while defined in an Intel-specific file, these functions are also used for CPUs from other vendors when they don't support cmpxchg and/or cmpxchg64. This breaks the compilation when support for Intel CPUs is disabled.

This patch moves these functions to a new arch/x86/kernel/cpu/cmpxchg.c file, unconditionally compiled when X86_32 is enabled.

Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: michael@free-electrons.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
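For readers unfamiliar with these fallbacks, the sketch below shows the semantics only. The names are made up; the real kernel versions run with local interrupts disabled on the uniprocessor 386-class machines they target, which makes them safe there and which plain C cannot reproduce.

/* Semantics of a software cmpxchg fallback: store only if *ptr still equals old. */
#include <stdio.h>

static unsigned long cmpxchg_emulated(volatile unsigned long *ptr,
                                      unsigned long old, unsigned long new)
{
    unsigned long prev = *ptr;   /* in the kernel: irqs are off around this */

    if (prev == old)
        *ptr = new;
    return prev;                 /* the caller succeeded iff the return value == old */
}

int main(void)
{
    volatile unsigned long v = 1;

    unsigned long prev = cmpxchg_emulated(&v, 1, 2);
    printf("prev=%lu v=%lu\n", prev, v);   /* prev=1 v=2: swapped */
    prev = cmpxchg_emulated(&v, 1, 3);
    printf("prev=%lu v=%lu\n", prev, v);   /* prev=2 v=2: compare failed, no store */
    return 0;
}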
-
Committed by Thomas Petazzoni
movsl_mask is currently defined in arch/x86/kernel/cpu/intel.c, which contains code specific to Intel CPUs. However, movsl_mask is used in the non-CPU-specific code in arch/x86/lib/usercopy_32.c, which breaks the compilation when support for Intel CPUs is compiled out.

This patch solves the problem by moving movsl_mask's definition close to its users in arch/x86/lib/usercopy_32.c.

Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: michael@free-electrons.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 18 Jul 2008, 1 commit

Committed by Maciej W. Rozycki
Use alternatives to select the workaround for the 11AP Pentium erratum for the affected steppings on the fly rather than at build time. Remove the X86_GOOD_APIC configuration option and replace all the calls to apic_write_around() with plain apic_write(), protecting accesses to the ESR as appropriate due to the 3AP Pentium erratum. Remove apic_read_around() and all its invocations altogether as not needed. Remove apic_write_atomic() and all its implementing backends. The use of ASM_OUTPUT2() is not strictly needed for input constraints, but I have used it for readability's sake.

I had the feeling no one else was brave enough to do it, so I went ahead and here it is. Verified by checking the generated assembly and tested with both a 32-bit and a 64-bit configuration, also with the 11AP "feature" forced on, and verified with gdb on /proc/kcore to work as expected (as 11AP machines are quite hard to get hands on these days). Some script complained about the use of "volatile", but apic_write() needs it for the same reason and is effectively a replacement for writel(), so I have disregarded it.

I am not sure what the policy wrt defconfig files is; they are generated and there is a risk of a conflict resulting from an unrelated change, so I have left changes to them out. The option will get removed from them at the next run.

Some testing with machines other than mine will be needed to avoid some stupid mistake, but despite its volume, the change is not really that intrusive, so I am fairly confident that because it works for me, it will work everywhere.

Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 13 Jul 2008, 1 commit

Committed by Yinghai Lu
Got this on a test system:

  calling numaq_tsc_disable+0x0/0x39
  NUMAQ: disabling TSC
  initcall numaq_tsc_disable+0x0/0x39 returned 0 after 0 msecs

That's because we should not be using arch_initcall to call numaq_tsc_disable. We need to call it in setup_arch before time_init()/tsc_init(), and call it in init_intel() to make the cpu feature bits right.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 13 May 2008, 1 commit

Committed by Markus Metzger
Polish the ds.h interface and add support for PEBS.

ds.c is meant to be the resource allocator for per-thread and per-cpu BTS and PEBS recording. It is used by ptrace/utrace to provide execution tracing of debugged tasks. It will be used by profilers (e.g. perfmon2). It may be used by kernel debuggers to provide a kernel execution trace.

Changes in detail:

- guard DS and ptrace by CONFIG macros
- separate DS and BTS more clearly
- simplify field accesses
- add functions to manage PEBS buffers
- add a simple protection/allocation mechanism
- add support for Atom

Opens:

- Buffer overflow handling: currently, only circular buffers are supported. This is all we need for debugging. Profilers would want an overflow notification. This is planned to be added when perfmon2 is made to use the ds.h interface.
- utrace intermediate layer

Signed-off-by: Markus Metzger <markus.t.metzger@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 17 Apr 2008, 3 commits

Committed by Ingo Molnar
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Paolo Ciarrocchi
Before: total: 37 errors, 16 warnings, 366 lines checked
After:  total:  0 errors, 15 warnings, 369 lines checked

No code changed:

arch/x86/kernel/cpu/intel.o:

   text    data     bss     dec     hex  filename
   1534     452       0    1986     7c2  intel.o.before
   1534     452       0    1986     7c2  intel.o.after

md5:
   1ca348a06de6eb354c4b6ea715a57db5  intel.o.before.asm
   1ca348a06de6eb354c4b6ea715a57db5  intel.o.after.asm

Signed-off-by: Paolo Ciarrocchi <paolo.ciarrocchi@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Thomas Petazzoni
Replace the hardcoded list of initialization functions for each CPU vendor by a list in an ELF section, which is read at initialization in arch/x86/kernel/cpu/cpu.c to fill the cpu_devs[] array. The ELF section, named .x86cpuvendor.init, is reclaimed after boot, and contains entries of type "struct cpu_vendor_dev" which associate a vendor number with a pointer to a "struct cpu_dev" structure.

This first modification allows removing all the VENDOR_init_cpu() functions.

This patch also removes the hardcoded calls to early_init_amd() and early_init_intel(). Instead, we add a "c_early_init" member to the cpu_dev structure, which is then called, if not NULL, by the generic CPU initialization code. Unfortunately, in early_cpu_detect(), this_cpu is not yet set, so we have to use the cpu_devs[] array directly.

This patch is part of the Linux Tiny project, and is needed for a further patch that will allow disabling compilation of unused CPU support code.

Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
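The registration pattern itself can be shown in a stand-alone sketch; the section name and struct below are hypothetical, not the kernel's .x86cpuvendor.init. Objects are dropped into a named ELF section with __attribute__((section(...))) and the __start_/__stop_ symbols, which GNU ld provides automatically for sections named like C identifiers, let init code walk them without a hardcoded list.

/* Build: gcc -O2 vendors.c -o vendors */
#include <stdio.h>

struct cpu_vendor_entry {
    int vendor_id;
    const char *name;
};

#define REGISTER_VENDOR(id, n)                                  \
    static const struct cpu_vendor_entry                        \
    __attribute__((section("cpu_vendors"), used))               \
    vendor_##id = { .vendor_id = id, .name = n }

REGISTER_VENDOR(0, "Intel");
REGISTER_VENDOR(1, "AMD");

/* Provided by the linker for the "cpu_vendors" section. */
extern const struct cpu_vendor_entry __start_cpu_vendors[];
extern const struct cpu_vendor_entry __stop_cpu_vendors[];

int main(void)
{
    for (const struct cpu_vendor_entry *e = __start_cpu_vendors;
         e < __stop_cpu_vendors; e++)
        printf("vendor %d: %s\n", e->vendor_id, e->name);
    return 0;
}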
-
- 04 Feb 2008, 1 commit

Committed by Harvey Harrison
Fixes sparse warning:

  arch/x86/kernel/cpu/intel.c:48:15: warning: symbol 'ppro_with_ram_bug' was not declared. Should it be static?

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 30 Jan 2008, 4 commits

Committed by Andi Kleen
Previously it was only run for Intel CPUs, but AMD Fam10h implements MWAIT too. This matches the 64-bit behaviour.

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Andi Kleen
This is needed by the next patch, in time_init(), and that happens early. This includes a minor fix on i386 where early_intel_workarounds() [which is now called early_init_intel()] really executes early, as the comments say.

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Ingo Molnar
LFENCE is available on XMM2-or-higher Intel CPUs - not XMM or higher... this caused boot failures on XMM1 && !XMM2 capable CPUs.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Andi Kleen
According to Intel, RDTSC can always be synchronized with LFENCE on all current CPUs. Implement the necessary CPUID bit for that. It is unclear yet if that is true for all future CPUs too, but if there's another way the kernel can always be updated.

Cc: asit.k.mallick@intel.com
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
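A user-space sketch of the pattern the new feature bit enables: when the CPU guarantees that LFENCE serializes RDTSC, an ordered TSC read is just the two instructions back to back. The kernel expresses this via alternatives; the helper name here is illustrative.

/* Read the TSC with an LFENCE in front, so the read is not executed early. */
#include <stdint.h>
#include <stdio.h>

static inline uint64_t rdtsc_ordered(void)
{
    uint32_t lo, hi;

    /* lfence keeps rdtsc from executing before preceding instructions complete */
    asm volatile("lfence; rdtsc" : "=a" (lo), "=d" (hi));
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    uint64_t t1 = rdtsc_ordered();
    uint64_t t2 = rdtsc_ordered();

    printf("delta: %llu cycles\n", (unsigned long long)(t2 - t1));
    return 0;
}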
-