- 21 October 2015, 1 commit
-
-
Submitted by Borislav Petkov

Move the generic help text explaining each CPU type and what to select so that it sits under the "Processor Family" prompt rather than under the M486 option. Also, amend it with the missing options.

Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1445244077-25120-1-git-send-email-bp@alien8.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 12 March 2014, 1 commit
-
-
Submitted by Dave Jones

This was an optimization that made memcpy-type benchmarks a little faster on ancient (circa 1998) IDT WinChip CPUs. In real-life workloads it wasn't even noticeable, and I doubt anyone is running benchmarks on 16-year-old silicon any more. Given that this code has likely seen very little use over the last decade, let's just remove it.

Signed-off-by: Dave Jones <davej@fedoraproject.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 28 February 2014, 1 commit
-
-
Submitted by H. Peter Anvin

The NUMAQ support seems to be unmaintained; remove it.

Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: David Rientjes <rientjes@google.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Link: http://lkml.kernel.org/r/n/530CFD6C.7040705@zytor.com
-
- 30 November 2012, 8 commits
-
-
Submitted by H. Peter Anvin

Per Alan Cox, the Nx586 did not support WP in supervisor mode, making it a 386 by Linux kernel standards. As such, it is now unsupported as well.

Reported-by: Alan Cox <alan@lxorguk.ukuu.org.uk>
Link: http://lkml.kernel.org/r/20121128205203.05868eab@pyramind.ukuu.org.uk
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Submitted by H. Peter Anvin

The check_popad() routine tested for a 386-specific bug and never actually did anything useful with the result other than print a message.

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Link: http://lkml.kernel.org/r/1354132230-21854-8-git-send-email-hpa@linux.intel.com
-
Submitted by H. Peter Anvin

All 486+ CPUs support WP in supervisor mode, so remove the fallback 386 support code.

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Link: http://lkml.kernel.org/r/1354132230-21854-7-git-send-email-hpa@linux.intel.com
-
Submitted by H. Peter Anvin

All 486+ CPUs support INVLPG, so remove the fallback 386 support code.

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Link: http://lkml.kernel.org/r/1354132230-21854-6-git-send-email-hpa@linux.intel.com
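For context, the difference between the two paths looks roughly like this (an illustrative, kernel-style sketch using the real instructions; both asm statements are privileged and only make sense in ring 0, and the function names here are made up for the example):

```c
/* 486+ path: invalidate the TLB entry for a single virtual address. */
static inline void flush_tlb_one_page(unsigned long addr)
{
	asm volatile("invlpg (%0)" : : "r" (addr) : "memory");
}

/* 386 fallback that was removed: no INVLPG, so reload CR3 and throw
 * away every non-global TLB entry just to drop one stale mapping. */
static inline void flush_tlb_all_386(void)
{
	unsigned long cr3;

	asm volatile("mov %%cr3, %0" : "=r" (cr3));
	asm volatile("mov %0, %%cr3" : : "r" (cr3) : "memory");
}
```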
-
Submitted by H. Peter Anvin

All 486+ CPUs support BSWAP, so remove the fallback 386 support code.

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Link: http://lkml.kernel.org/r/1354132230-21854-5-git-send-email-hpa@linux.intel.com
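A minimal user-space illustration of what the removed fallback had to do compared with the single 486+ instruction (my own sketch, not the kernel's swab implementation):

```c
#include <stdint.h>
#include <stdio.h>

/* 386-style byte swap: shifts and masks only, no BSWAP available. */
static uint32_t swab32_noinstr(uint32_t x)
{
	return ((x & 0x000000ffu) << 24) |
	       ((x & 0x0000ff00u) <<  8) |
	       ((x & 0x00ff0000u) >>  8) |
	       ((x & 0xff000000u) >> 24);
}

/* 486+ path: one instruction. */
static uint32_t swab32_bswap(uint32_t x)
{
	asm("bswap %0" : "+r" (x));
	return x;
}

int main(void)
{
	/* Both print 78563412. */
	printf("%08x %08x\n", swab32_noinstr(0x12345678u),
	       swab32_bswap(0x12345678u));
	return 0;
}
```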
-
Submitted by H. Peter Anvin

All 486+ CPUs support XADD, so remove the fallback 386 support code.

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Link: http://lkml.kernel.org/r/1354132230-21854-4-git-send-email-hpa@linux.intel.com
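The primitive in question is atomic fetch-and-add. A rough user-space sketch of the 486+ path versus a generic retry-loop fallback follows (function names are made up; the real 386 code could not even do the loop, since the 386 also lacks CMPXCHG, and relied on disabling interrupts instead):

```c
#include <stdio.h>

/* 486+ path: XADD returns the old value and adds, in one locked instruction. */
static int fetch_add_xadd(int *v, int i)
{
	asm volatile("lock; xaddl %0, %1"
		     : "+r" (i), "+m" (*v)
		     : : "memory");
	return i;		/* old value of *v */
}

/* Illustrative fallback: a compare-exchange retry loop. */
static int fetch_add_loop(int *v, int i)
{
	int old;

	do {
		old = __atomic_load_n(v, __ATOMIC_RELAXED);
	} while (!__atomic_compare_exchange_n(v, &old, old + i, 0,
					      __ATOMIC_SEQ_CST,
					      __ATOMIC_SEQ_CST));
	return old;
}

int main(void)
{
	int a = 40, b = 40;

	printf("%d %d\n", fetch_add_xadd(&a, 2), fetch_add_loop(&b, 2));
	printf("%d %d\n", a, b);	/* both now 42 */
	return 0;
}
```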
-
Submitted by H. Peter Anvin

All 486+ CPUs support CMPXCHG, so remove the fallback 386 support code.

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Link: http://lkml.kernel.org/r/1354132230-21854-3-git-send-email-hpa@linux.intel.com
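For reference, the instruction the 386 lacked, sketched as stand-alone C with inline asm (an illustration only; the kernel's old 386 emulation worked by disabling interrupts around a plain load/compare/store, which is only safe on uniprocessor builds):

```c
#include <stdio.h>

/* Returns the value that was in *ptr; the store to *ptr happened only
 * if that value equalled 'old'. */
static unsigned long cmpxchg_insn(unsigned long *ptr, unsigned long old,
				  unsigned long new)
{
	unsigned long prev;

	asm volatile("lock; cmpxchg %2, %1"
		     : "=a" (prev), "+m" (*ptr)
		     : "r" (new), "0" (old)
		     : "memory");
	return prev;
}

int main(void)
{
	unsigned long v = 1;

	cmpxchg_insn(&v, 1, 2);		/* succeeds, v becomes 2 */
	cmpxchg_insn(&v, 1, 3);		/* fails, v stays 2 */
	printf("%lu\n", v);
	return 0;
}
```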
-
Submitted by H. Peter Anvin

Remove the CONFIG_M386 symbol from Kconfig so that it can no longer be selected.

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Link: http://lkml.kernel.org/r/1354132230-21854-2-git-send-email-hpa@linux.intel.com
-
- 13 September 2012, 1 commit
-
-
Submitted by Jan Beulich

The main goal here is to have the resulting .config not carry any options that aren't enabled and can't be (i.e. options whose default is "no" and cannot be changed), so that if any such option later gets a user-visible prompt, the user will actually be prompted on a "make oldconfig" rather than keeping the previously invisible option disabled. A little bit of other trivial cleanup is mixed in here.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/504DEE19020000780009A285@nat28.tlf.novell.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 08 March 2012, 1 commit
-
-
Submitted by Jan Beulich

Building in support for either of these CPUs is pointless when e.g. M686 was selected, since such a kernel would use CMOV instructions, which aren't available on these older CPUs.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Link: http://lkml.kernel.org/r/4F58875A02000078000770E0@nat28.tlf.novell.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 05 March 2012, 1 commit
-
-
Submitted by Alex Shi

Cache alignment among nodes in the kernel is currently still 128 bytes on x86 NUMA machines; we inherited that X86_INTERNODE_CACHE_SHIFT default from old P4 processors. Most modern x86 CPUs, however, use the same line size of 64 bytes from L1 all the way to the last-level L3, so remove the incorrect setting and use the L1 cache size directly for SMP cache-line alignment. This saves some memory space in kernel data and also improves the cache locality of kernel data.

The System.map is quite different with and without this change (excerpt):

    before patch                      after patch
    ...                               ...
    000000000000b000 d tlb_vector_    000000000000b000 d tlb_vector
    000000000000b080 d cpu_loops_p    000000000000b040 d cpu_loops_
    ...                               ...

Signed-off-by: Alex Shi <alex.shi@intel.com>
Cc: asit.k.mallick@intel.com
Link: http://lkml.kernel.org/r/1330774047-18597-1-git-send-email-alex.shi@intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
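To make the space cost of over-aligning concrete, here is a small stand-alone C illustration (my own sketch; the struct names are made up and are not from the patch):

```c
#include <stdio.h>

/* Aligned to the old P4-era internode line size. */
struct counters_128 {
	unsigned long count;
} __attribute__((aligned(128)));

/* Aligned to the 64-byte line size of modern x86 CPUs. */
struct counters_64 {
	unsigned long count;
} __attribute__((aligned(64)));

int main(void)
{
	/* Sixteen "per-CPU" slots waste half their space at 128-byte
	 * alignment while gaining no extra false-sharing protection on
	 * CPUs whose cache lines are 64 bytes. */
	printf("128-byte aligned: %zu bytes for 16 slots\n",
	       16 * sizeof(struct counters_128));
	printf(" 64-byte aligned: %zu bytes for 16 slots\n",
	       16 * sizeof(struct counters_64));
	return 0;
}
```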
-
- 13 January 2012, 2 commits
-
-
Submitted by Heiko Carstens

Move CMPXCHG_DOUBLE and rename it to HAVE_CMPXCHG_DOUBLE, so architectures can simply select the option if it is supported.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Heiko Carstens

Move CMPXCHG_LOCAL and rename it to HAVE_CMPXCHG_LOCAL, so architectures can simply select the option if it is supported.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 26 June 2011, 1 commit
-
-
Submitted by Christoph Lameter

A simple implementation that only supports the word size and does not have a fallback mode (that would require a spinlock). Add 32- and 64-bit support for cmpxchg_double. cmpxchg_double uses the cmpxchg8b or cmpxchg16b instruction on x86 processors to compare and swap two machine words. This allows lockless algorithms to move more context information through critical sections.

Set a flag, CONFIG_CMPXCHG_DOUBLE, to signal that support for double-word cmpxchg detection has been built into the kernel. Note that each subsystem using cmpxchg_double has to implement a fallback mechanism as long as we offer support for processors that do not implement cmpxchg_double.

Reviewed-by: H. Peter Anvin <hpa@zytor.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Christoph Lameter <cl@linux.com>
Link: http://lkml.kernel.org/r/20110601172614.173427964@linux.com
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
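The concept is easy to demonstrate in plain user-space C with GCC's generic atomic builtins. This is only a sketch of the idea, not the kernel's cmpxchg_double() API, and the 16-byte case needs -mcx16 and/or -latomic so that cmpxchg16b can actually be used:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Two adjacent, naturally aligned machine words, e.g. a pointer plus a
 * generation counter, updated together in one double-word CAS. */
struct pair {
	uint64_t ptr;
	uint64_t gen;
} __attribute__((aligned(16)));

static bool pair_cas(struct pair *p, struct pair old, struct pair new)
{
	/* Depending on compiler version and flags this is an inline
	 * lock cmpxchg16b or a libatomic call that uses it. */
	return __atomic_compare_exchange(p, &old, &new, false,
					 __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}

int main(void)
{
	struct pair p = { .ptr = 0x1000, .gen = 1 };

	pair_cas(&p, (struct pair){ 0x1000, 1 }, (struct pair){ 0x2000, 2 });
	printf("ptr=%#llx gen=%llu\n",
	       (unsigned long long)p.ptr, (unsigned long long)p.gen);
	return 0;
}
```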
-
- 09 April 2011, 1 commit
-
-
Submitted by Ian Campbell

Currently the option resides under X86_EXTENDED_PLATFORM due to historical nonstandard A20M# handling. That is no longer the case, however, so the Elan can be treated as part of the standard processor-choice Kconfig option.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Link: http://lkml.kernel.org/r/1302245177.31620.47.camel@localhost.localdomain
Cc: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- 18 March 2011, 1 commit
-
-
Submitted by Lucas De Marchi

They were generated by 'codespell' and then manually reviewed.

Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi>
Cc: trivial@kernel.org
LKML-Reference: <1300389856-1099-3-git-send-email-lucas.demarchi@profusion.mobi>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 17 March 2011, 1 commit
-
-
Submitted by Jon Nettleton

Testing on the OLPC XO-1.5 (VIA C7-M 1000 MHz CPU) shows a partial_csum() speedup by a factor of 1.5 when we switch to the Pentium-optimized version.

Signed-off-by: Daniel Drake <dsd@laptop.org>
Cc: dilinger@queued.net
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 09 March 2011, 1 commit
-
-
Submitted by Jan Beulich

This isn't referenced anywhere, and the selects done from it can easily be done together with all the other X86 ones.

v2: Also adjust UML's Kconfig.x86.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
LKML-Reference: <4D7603DA02000078000351C1@vpn.id2.novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 21 January 2011, 1 commit
-
-
Submitted by David Rientjes

The meaning of CONFIG_EMBEDDED has long since been obsoleted; the option is used to configure any non-standard kernel, with a much larger scope than only small devices. This patch renames the option to CONFIG_EXPERT in init/Kconfig and fixes references to the option throughout the kernel. A new CONFIG_EMBEDDED option is added that automatically selects CONFIG_EXPERT when enabled and can be used in the future to isolate options that should only be considered for embedded systems (RISC architectures, SLOB, etc).

Calling the option "EXPERT" more accurately represents its intention: only expert users who understand the impact of the configuration changes they are making should enable it.

Reviewed-by: Ingo Molnar <mingo@elte.hu>
Acked-by: David Woodhouse <david.woodhouse@intel.com>
Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Greg KH <gregkh@suse.de>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Robin Holt <holt@sgi.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 18 December 2010, 1 commit
-
-
Submitted by Christoph Lameter

Provide support as far as the hardware capabilities of the x86 CPUs allow. Define CONFIG_CMPXCHG_LOCAL in Kconfig.cpu to allow core code to test for fast cpuops implementations.

V1->V2:
- Take out the definition for this_cpu_cmpxchg_8 and move it into a separate patch.

tj:
- Reordered ops to better follow the this_cpu_* organization.
- Renamed macro temp variables similarly to their existing neighbours.

Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
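As a rough illustration of the primitive this enables (a kernel-style sketch, not code from the patch; the per-CPU variable and helper names are hypothetical):

```c
#include <linux/percpu.h>
#include <linux/types.h>

/* Hypothetical per-CPU flag, used only to illustrate this_cpu_cmpxchg(). */
static DEFINE_PER_CPU(unsigned long, frob_active);

/* Claim this CPU's slot if it is free; returns true on success.  On x86
 * this compiles to a single cmpxchg without a lock prefix, since only
 * code running on this CPU (including its interrupt handlers) can race
 * with us. */
static bool frob_try_claim(void)
{
	return this_cpu_cmpxchg(frob_active, 0, 1) == 0;
}
```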
-
- 04 May 2010, 1 commit
-
-
Submitted by Brian Gerst

The cache flush denied error is an erratum on some AMD 486 clones: if an invd instruction is executed in userspace, the processor raises exception 19 (13 hex) instead of #GP (13 decimal). On CPUs where XMM is not supported, redirect exception 19 to do_general_protection(). Also remove die_if_kernel(), since this was its last user.

Signed-off-by: Brian Gerst <brgerst@gmail.com>
LKML-Reference: <1269176446-2489-2-git-send-email-brgerst@gmail.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 26 March 2010, 1 commit
-
-
Submitted by Peter Zijlstra

Support for the PMU's BTS features was upstreamed in v2.6.32, but we still have the old and disabled ptrace-BTS, as Linus noticed not so long ago. It's buggy: TIF_DEBUGCTLMSR is trampling all over that MSR without regard for other users (perf) and doesn't provide the flexibility needed for perf either. Its users are ptrace-block-step and ptrace-bts; ptrace-bts was never used, and ptrace-block-step can be implemented using a much simpler approach. So axe all 3000 lines of it. That includes the *locked_memory*() APIs in mm/mlock.c as well.

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Roland McGrath <roland@redhat.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Markus Metzger <markus.t.metzger@intel.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
LKML-Reference: <20100325135413.938004390@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 14 January 2010, 1 commit
-
-
Submitted by Linus Torvalds

This one is much faster than the spinlock-based fallback rwsem code, with certain artificial benchmarks having shown 300%+ improvement on threaded page faults etc.

Again, note the 32767-thread limit here. So this really does need that whole "make rwsem_count_t 64-bit and fix the BIAS values to match" extension on top of it, but that is conceptually a totally independent issue.

NOT TESTED! The original patch that this all was based on was tested by KAMEZAWA Hiroyuki, but maybe I screwed something up when I created the cleaned-up series, so caveat emptor.

Also note that it _may_ be a good idea to mark some more registers clobbered on x86-64 in the inline asms instead of saving/restoring them. They are inline functions, but they are only used in places where there are not a lot of live registers _anyway_, so doing, for example, the clobbers of %r8-%r11 in the asm wouldn't make the fast-path code any worse, and would make the slow-path code smaller. (Not that the slow path really matters to that degree. Saving a few unnecessary registers is the _least_ of our problems when we hit the slow path. The instruction/cycle counting really only matters in the fast path.)

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <alpine.LFD.2.00.1001121810410.17145@localhost.localdomain>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
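For readers unfamiliar with the BIAS scheme being referred to: the classic 32-bit layout packs the active-locker count and a waiter marker into one signed word, roughly as below (values quoted from memory of the generic/x86 rwsem headers of that era, so treat them as illustrative rather than authoritative):

```c
/* Classic 32-bit rwsem count layout (illustrative):
 *   low 16 bits  - number of active readers/writers
 *   high bits    - negative bias added while anyone is waiting
 */
#define RWSEM_UNLOCKED_VALUE	0x00000000
#define RWSEM_ACTIVE_BIAS	0x00000001
#define RWSEM_ACTIVE_MASK	0x0000ffff
#define RWSEM_WAITING_BIAS	(-0x00010000)
#define RWSEM_ACTIVE_READ_BIAS	RWSEM_ACTIVE_BIAS
#define RWSEM_ACTIVE_WRITE_BIAS	(RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)

/* With effectively a signed 16-bit field for the active count, roughly
 * 32767 concurrent lockers is the ceiling, hence the suggestion to widen
 * rwsem_count_t to 64 bits and scale the BIAS values to match. */
```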
-
- 06 January 2010, 1 commit
-
-
Submitted by Rusty Russell

This reverts commit ae1b22f6. As Linus said in 982d007a: "There was something really messy about cmpxchg8b and clone CPU's, so if you enable it on other CPUs later, do it carefully."

This breaks lguest for those configs, but we can fix that by emulating if we have to.

Fixes: http://bugzilla.kernel.org/show_bug.cgi?id=14884
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 05 January 2010, 1 commit
-
-
Submitted by Rusty Russell

This reverts commit ae1b22f6. As Linus said in 982d007a: "There was something really messy about cmpxchg8b and clone CPU's, so if you enable it on other CPUs later, do it carefully."

This breaks lguest for those configs, but we can fix that by emulating if we have to.

Fixes: http://bugzilla.kernel.org/show_bug.cgi?id=14884
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
LKML-Reference: <201001051248.49700.rusty@rustcorp.com.au>
Cc: stable@kernel.org
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 19 November 2009, 1 commit
-
-
Submitted by Jan Beulich

Rather than having both X86_L1_CACHE_BYTES and X86_L1_CACHE_SHIFT (with inconsistent defaults), just having the latter suffices, as the former can easily be calculated from it.

To be consistent, also change X86_INTERNODE_CACHE_BYTES to X86_INTERNODE_CACHE_SHIFT, and set it to 7 (128 bytes) for NUMA to account for the last-level cache line size (which here matters more than the L1 cache line size).

Finally, make sure the default value for X86_L1_CACHE_SHIFT, when X86_GENERIC is selected, is seen before that for the individual CPU model options (other than on x86-64, where GENERIC_CPU is part of the choice construct, X86_GENERIC is a separate option on ix86).

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Acked-by: Ravikiran Thirumalai <kiran@scalex86.org>
Acked-by: Nick Piggin <npiggin@suse.de>
LKML-Reference: <4AFD5710020000780001F8F0@vpn.id2.novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
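With that change the byte values become derived quantities rather than independent Kconfig symbols, along the lines of the following sketch (mirroring, as far as I recall, what arch/x86/include/asm/cache.h ends up doing; treat the exact macro names as illustrative):

```c
/* Only the shifts are configured; the byte sizes follow from them. */
#define L1_CACHE_SHIFT		(CONFIG_X86_L1_CACHE_SHIFT)
#define L1_CACHE_BYTES		(1 << L1_CACHE_SHIFT)

#define INTERNODE_CACHE_SHIFT	(CONFIG_X86_INTERNODE_CACHE_SHIFT)
#define INTERNODE_CACHE_BYTES	(1 << INTERNODE_CACHE_SHIFT)
/* e.g. a shift of 7 for NUMA gives 1 << 7 = 128-byte internode alignment */
```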
-
- 26 October 2009, 1 commit
-
-
Submitted by Rusty Russell

Commit 79e1dd05 "x86: Provide an alternative() based cmpxchg64()" broke lguest, even on systems which have cmpxchg8b support. The emulation code gets used until alternatives are run, but it contains native instructions, not their paravirt alternatives. The simplest fix is to turn this code off except for 386 and 486 builds.

Reported-by: Johannes Stezenbach <js@sig21.net>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Cc: lguest@ozlabs.org
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <200910261426.05769.rusty@rustcorp.com.au>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 03 October 2009, 1 commit
-
-
Submitted by Matteo Croce

Add CPU optimizations for the AMD Geode LX.

Signed-off-by: Matteo Croce <technoboy85@gmail.com>
LKML-Reference: <40101cc30910010811v5d15ff4cx9dd57c9cc9b4b045@mail.gmail.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 01 October 2009, 1 commit
-
-
Submitted by Linus Torvalds

Try to avoid the 'alternates()' code when we can statically determine that cmpxchg8b is fine. We already have that CONFIG_X86_CMPXCHG64 (enabled by PAE support), and we could easily also enable it for some of the CPU cases.

Note, this patch only adds CMPXCHG8B for the obvious Intel CPUs, not for others. (There was something really messy about cmpxchg8b and clone CPUs, so if you enable it on other CPUs later, do it carefully.)

If we avoid that asm-alternative thing when we can assume the instruction exists, we'll generate less support crud, and we'll avoid the whole issue with that extra 'nop' for padding instruction sizes etc.

LKML-Reference: <alpine.LFD.2.01.0909301743150.6996@localhost.localdomain>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
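For reference, the 64-bit compare-and-swap that CONFIG_X86_CMPXCHG64 makes unconditional boils down to a single instruction on a 32-bit build. Here is a sketch of the usual idiom (not the kernel's exact code; the "=A" constraint means the edx:eax pair, so this only builds correctly with -m32):

```c
#include <stdint.h>

static uint64_t cmpxchg8b(volatile uint64_t *ptr, uint64_t old, uint64_t new)
{
	uint64_t prev;

	/* Compares edx:eax with *ptr; on a match stores ecx:ebx into *ptr,
	 * otherwise loads the current value back into edx:eax. */
	asm volatile("lock; cmpxchg8b %1"
		     : "=A" (prev), "+m" (*ptr)
		     : "b" ((uint32_t)new), "c" ((uint32_t)(new >> 32)),
		       "0" (old)
		     : "memory");
	return prev;
}
```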
-
- 23 August 2009, 1 commit
-
-
Submitted by Tobias Doerffel

Add another option when selecting the CPU family so the kernel can be optimized for Intel Atom CPUs. If GCC supports tuning options for Intel Atom, they will be used.

Signed-off-by: Tobias Doerffel <tobias.doerffel@gmail.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
LKML-Reference: <1251018457-19157-1-git-send-email-tobias.doerffel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 11 June 2009, 1 commit
-
-
Submitted by Ingo Molnar

This reverts commit 7e0bfad2. A late objection to the ABI has arrived: http://lkml.org/lkml/2009/6/10/253

Keep the ABI disabled out of caution, so as not to create premature user-space expectations. Meanwhile, the hw-branch-tracing variant uses and tests the BTS code.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Markus Metzger <markus.t.metzger@intel.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 24 April 2009, 1 commit
-
-
Submitted by Markus Metzger

The races found by Oleg Nesterov have been fixed. Re-enable branch trace support.

Signed-off-by: Markus Metzger <markus.t.metzger@intel.com>
Acked-by: Oleg Nesterov <oleg@redhat.com>
LKML-Reference: <20090424094448.A30216@sedona.ch.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 16 April 2009, 1 commit
-
-
Submitted by Ingo Molnar

Oleg Nesterov found a couple of races in the ptrace-BTS code, and fixes are queued up for them, but they did not get ready in time for the merge window. We'll merge them in v2.6.31; until then, mark the feature as CONFIG_BROKEN. There's no user-space making use of this yet, so it's not a big issue.

Cc: <stable@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 14 March 2009, 1 commit
-
-
Submitted by Sebastian Andrzej Siewior

There should be no difference, except:
- the 64-bit variant now also initializes the PadLock unit;
- ->c_early_init() is executed again from ->c_init();
- the 64-bit fixups made it into the 32-bit path.

Signed-off-by: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
Cc: herbert@gondor.apana.org.au
LKML-Reference: <1237029843-28076-2-git-send-email-sebastian@breakpoint.cc>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 06 February 2009, 1 commit
-
-
Submitted by Ingo Molnar

- Consistent alignment of help text
- Use the ---help--- keyword everywhere consistently, as a visual separator
- Fix whitespace mismatches

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 05 February 2009, 1 commit
-
-
Submitted by Borislav Petkov

Impact: cleanup

Some lines exceed the 80-character width, making them unreadable.

Signed-off-by: Borislav Petkov <petkovbb@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 21 January 2009, 1 commit
-
-
Submitted by Ingo Molnar

Fix:

    arch/x86/mm/tlb.c:47: error: ‘CONFIG_X86_INTERNODE_CACHE_BYTES’ undeclared here (not in a function)

The CONFIG_X86_INTERNODE_CACHE_BYTES symbol is only defined on 64-bit, because vsmp support is 64-bit only. Define it on 32-bit too, where it will always be equal to X86_L1_CACHE_BYTES. Also move the default of X86_L1_CACHE_BYTES (which is separate from the more commonly used L1_CACHE_SHIFT kconfig symbol) from 128 bytes to 64 bytes.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-