- 19 June 2009, 2 commits
-
-
Committed by Steven Rostedt

In case gcc does something funny with the stack frames, or the return-from-function code, we would like to detect that. An arch may implement passing of a variable that is unique to the function and can be saved on entering a function and tested when exiting the function. Usually the frame pointer can be used for this purpose.

This patch also implements this for x86, where it passes in the stack frame of the parent function and tests that frame on exit.

There was a case in x86_32 with optimize for size (-Os) where, for a few functions, gcc would align the stack frame and place a copy of the return address into it. The function graph tracer modified the copy and not the actual return address. On return from the function, it did not go to the tracer hook, but returned to the parent. This broke the function graph tracer, because the return of the parent (where gcc did not do this funky manipulation) returned to the location that the child function was supposed to. This caused strange kernel crashes. This test detected the problem and pointed out where the issue was.

This modifies the parameters of one of the functions that the arch-specific code calls, so it includes changes to arch code to accommodate the new prototype.

Note, I notice that the parisc arch implements its own push_return_trace. This is now a generic function and ftrace_push_return_trace should be used instead. This patch does not touch that code.

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
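A sketch of the Kconfig side of this arch opt-in. The symbol name HAVE_FUNCTION_GRAPH_FP_TEST matches what the mainline tracer infrastructure uses for this test, but treat the exact spelling here as an assumption rather than a quote from the patch:

    # An arch declares that its function-graph entry code saves a
    # per-function value (typically the frame pointer) which the
    # return path verifies before jumping back.
    config HAVE_FUNCTION_GRAPH_FP_TEST
            bool

    # arch/x86/Kconfig then opts in:
    config X86
            def_bool y
            select HAVE_FUNCTION_GRAPH_FP_TEST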
-
Committed by FUJITA Tomonori

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Joerg Roedel <joerg.roedel@amd.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 15 June 2009, 1 commit
-
-
Committed by Pekka Enberg

The Kconfig options of kmemcheck are hidden under arch/x86, which makes porting to other architectures harder. To fix that, move the Kconfig bits to lib/Kconfig.kmemcheck and introduce a CONFIG_HAVE_ARCH_KMEMCHECK config option that architectures can define.

Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
[rebased for mainline inclusion]
Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
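A minimal sketch of the split the commit describes. lib/Kconfig.kmemcheck and CONFIG_HAVE_ARCH_KMEMCHECK are named in the message; the surrounding details are assumptions:

    # lib/Kconfig.kmemcheck (sketch)
    config HAVE_ARCH_KMEMCHECK
            bool

    config KMEMCHECK
            bool "kmemcheck: trap use of uninitialized memory"
            depends on DEBUG_KERNEL && HAVE_ARCH_KMEMCHECK

    # arch/x86/Kconfig opts in with:
    #         select HAVE_ARCH_KMEMCHECK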
-
- 29 May 2009, 5 commits
-
-
Committed by Hidetoshi Seto

Use tab.

Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by Andi Kleen

Allow user programs to write mce records into /dev/mcelog. When they do that, a fake machine check is triggered to test the machine check code. This uses the MCE MSR wrappers added earlier.

The implementation is straightforward: there is a struct mce record per CPU, and the MCE MSR accesses get data from there if there is valid data injected there. This allows testing the machine check code relatively realistically, because only the lowest layer of hardware access is intercepted.

The test suite and injector are available at:

    git://git.kernel.org/pub/scm/utils/cpu/mce/mce-test.git
    git://git.kernel.org/pub/scm/utils/cpu/mce/mce-inject.git

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
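The injector is gated behind its own config symbol; mainline calls it X86_MCE_INJECT, roughly as below (the help wording is a paraphrase, so treat it as an assumption):

    config X86_MCE_INJECT
            tristate "Machine check injector support"
            depends on X86_NEW_MCE
            help
              Provide support for injecting fake machine checks by
              writing mce records to /dev/mcelog, for testing the
              machine check code.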
-
Committed by Andi Kleen

That's very easy using the infrastructure enabled earlier for MCE_INTEL. Untested.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by Andi Kleen

Enable the 64bit MCE_INTEL code (CMCI, thermal interrupts) for 32bit NEW_MCE.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by Andi Kleen

The 64bit machine check code is in many ways much better than the 32bit machine check code: it is more specification compliant, is cleaner, has only a single code base versus one per CPU, has better infrastructure for recovery, has a cleaner way to communicate with user space, etc. Use the 64bit code for 32bit too.

This is the second attempt to do this. There was an attempt a couple of years ago to unify this code for 32bit and 64bit. Back then it ran into some trouble with K7s and was reverted. I believe this time the K7 problems (and some others) are addressed. I went over the old handlers and was very careful to retain all quirks. But of course this needs a lot of testing on old systems. On newer 64bit-capable systems I don't expect many problems because they have already been tested with the 64bit kernel.

I made this a CONFIG for now that still allows selecting the old machine check code. This is mostly to make testing easier; if someone runs into a problem we can ask them to try with the CONFIG switched. The new code is default y for more coverage. Once there is confidence that the 64bit code works well on older hardware too, CONFIG_X86_OLD_MCE and the associated code can be easily removed.

This causes a behaviour change for 32bit installations: they now have to install the mcelog package to be able to log corrected machine checks.

The 64bit machine check code only handles CPUs which support the standard Intel machine check architecture described in the IA32 SDM. The 32bit code has special support for some older CPUs which have non-standard machine check architectures, in particular WinChip C3 and Intel P5. I made those a separate CONFIG option and kept them for now. The WinChip variant could probably be removed without too much pain; it doesn't really do anything interesting. P5 is also disabled by default (like it was before) because many motherboards have it miswired, but according to Alan Cox a few embedded setups use that one.

Forward-ported/heavily changed version of an old patch; the original patch included review/fixes from Thomas Gleixner and Bert Wesarg.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
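A sketch of the resulting config layout. CONFIG_X86_OLD_MCE is named in the message; X86_NEW_MCE and X86_ANCIENT_MCE follow mainline naming of the era and should be read as assumptions:

    config X86_OLD_MCE
            bool "Use legacy machine check code (will go away)"
            depends on X86_32 && X86_MCE

    # The unified 64bit code, default y for coverage:
    config X86_NEW_MCE
            def_bool y
            depends on X86_MCE && (X86_64 || !X86_OLD_MCE)

    # Non-standard machine check architectures, kept separate:
    config X86_ANCIENT_MCE
            bool "Support for old Pentium 5 / WinChip machine checks"
            depends on X86_32 && X86_MCE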
-
- 16 May 2009, 1 commit
-
-
Committed by Jeremy Fitzhardinge

Xiaohui Xin and some other folks at Intel have been looking into what's behind the performance hit of paravirt_ops when running native. It appears that the hit is entirely due to the paravirtualized spinlocks introduced by:

    | commit 8efcbab6
    | Date: Mon Jul 7 12:07:51 2008 -0700
    |
    |     paravirt: introduce a "lock-byte" spinlock implementation

The extra call/return in the spinlock path is somehow causing an increase in the cycles/instruction of somewhere around 2-7% (it seems to vary quite a lot from test to test). The working theory is that the CPU's pipeline is getting upset about the call->call->locked-op->return->return, and seems to be failing to speculate (though I haven't seen anything definitive about the precise reasons). This doesn't entirely make sense, because the performance hit is also visible on unlock and other operations which don't involve locked instructions. But spinlock operations clearly swamp all the other pvops operations, even though I can't imagine that they're nearly as common (there's only a .05% increase in instructions executed).

If I disable just the pv-spinlock calls, my tests show that pvops is identical to non-pvops performance on native (my measurements show that it is actually about .1% faster, but Xiaohui shows a .05% slowdown).

Summary of results, averaging 10 runs of the "mmperf" test, using a no-pvops build as baseline:

                        nopv      Pv-nospin   Pv-spin
    CPU cycles          100.00%   99.89%      102.18%
    instructions        100.00%   100.10%     100.15%
    CPI                 100.00%   99.79%      102.03%
    cache ref           100.00%   100.84%     100.28%
    cache miss          100.00%   90.47%      88.56%
    cache miss rate     100.00%   89.72%      88.31%
    branches            100.00%   99.93%      100.04%
    branch miss         100.00%   103.66%     107.72%
    branch miss rt      100.00%   103.73%     107.67%
    wallclock           100.00%   99.90%      102.20%

The clear effect here is that the 2% increase in CPI is directly reflected in the final wallclock time. (The other interesting effect is that the more ops are out-of-line calls via pvops, the lower the cache access and miss rates. Not too surprising, but it suggests that the non-pvops kernel is over-inlined. On the flipside, the branch misses go up correspondingly...)

So, what's the fix? Paravirt patching turns all the pvops calls into direct calls, so _spin_lock etc do end up having direct calls. For example, the compiler-generated code for paravirtualized _spin_lock is:

    <_spin_lock+0>:  mov    %gs:0xb4c8,%rax
    <_spin_lock+9>:  incl   0xffffffffffffe044(%rax)
    <_spin_lock+15>: callq  *0xffffffff805a5b30
    <_spin_lock+22>: retq

The indirect call will get patched to:

    <_spin_lock+0>:  mov    %gs:0xb4c8,%rax
    <_spin_lock+9>:  incl   0xffffffffffffe044(%rax)
    <_spin_lock+15>: callq  <__ticket_spin_lock>
    <_spin_lock+20>: nop; nop   /* or whatever 2-byte nop */
    <_spin_lock+22>: retq

One possibility is to inline _spin_lock, etc, when building an optimised kernel (ie, when there's no spinlock/preempt instrumentation/debugging enabled). That will remove the outer call/return pair, returning the instruction stream to a single call/return, which will presumably execute the same as the non-pvops case. The downsides are: 1) it will replicate the preempt_disable/enable code at each lock/unlock callsite; this code is fairly small, but not nothing; and 2) the spinlock definitions are already a very heavily tangled mass of #ifdefs and other preprocessor magic, and making any changes will be non-trivial.

The other obvious answer is to disable pv-spinlocks. Making them a separate config option is fairly easy, and it would be trivial to enable them only when Xen is enabled (as the only non-default user). But it doesn't really address the common case of a distro build which is going to have Xen support enabled, and leaves the open question of whether the native performance cost of pv-spinlocks is worth the performance improvement on a loaded Xen system (10% saving of overall system CPU when guests block rather than spin). Still, it is a reasonable short-term workaround.

[ Impact: fix pvops performance regression when running native ]

Analysed-by: "Xin Xiaohui" <xiaohui.xin@intel.com>
Analysed-by: "Li Xin" <xin.li@intel.com>
Analysed-by: "Nakajima Jun" <jun.nakajima@intel.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Xen-devel <xen-devel@lists.xensource.com>
LKML-Reference: <4A0B62F7.5030802@goop.org>
[ fixed the help text ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
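The "separate config option" the message proposes landed as CONFIG_PARAVIRT_SPINLOCKS; a sketch of its shape, with the help text paraphrased:

    config PARAVIRT_SPINLOCKS
            bool "Paravirtualization layer for spinlocks"
            depends on PARAVIRT && SMP
            help
              Paravirtualized spinlocks let a pvops backend (such as
              Xen) replace the spinlock implementation with something
              virtualization-friendly, at the cost of a slight
              performance hit when running native.

              If you are unsure how to answer this question, answer N.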
-
- 12 May 2009, 2 commits
-
-
Committed by H. Peter Anvin

Remove the EXPERIMENTAL tag from CONFIG_RELOCATABLE and make it the default. Relocatable kernels have been used for a while now, and should now have identical semantics to non-relocatable kernels when loaded by a non-relocating bootloader.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
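The change itself is small; roughly (prompt text assumed):

    config RELOCATABLE
            bool "Build a relocatable kernel"
            default y       # was: default n, tagged EXPERIMENTAL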
-
Committed by H. Peter Anvin

Default CONFIG_PHYSICAL_START and CONFIG_PHYSICAL_ALIGN each to 16 MB, so that both non-relocatable and relocatable kernels are loaded at 16 MB by a non-relocating bootloader. This is somewhat hacky, but it appears to be the only way to do this that does not break some set of existing bootloaders.

We want to avoid the bottom 16 MB because of large page breakup, memory holes, and ZONE_DMA. Embedded systems may need to reduce this, or update their bootloaders to be aware of the new min_alignment field.

[ Impact: performance improvement, avoids problems on some systems ]

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
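A sketch of the new defaults (16 MB = 0x1000000; the prompt strings are approximations):

    config PHYSICAL_START
            hex "Physical address where the kernel is loaded"
            default "0x1000000"     # 16 MB

    config PHYSICAL_ALIGN
            hex "Alignment value to which kernel should be aligned"
            default "0x1000000"     # 16 MB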
-
- 09 May 2009, 1 commit
-
-
Committed by H. Peter Anvin

We only need to build relocations when we are building a 32-bit relocatable kernel. Rather than unnecessarily complicating the Makefiles, make an explicit Kbuild symbol for this.

[ Impact: permits future cleanup ]

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Sam Ravnborg <sam@ravnborg.org>
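Mainline named the symbol X86_NEED_RELOCS; the message only says "an explicit Kbuild symbol", so take the name as an assumption here. The shape is a simple derived symbol:

    config X86_NEED_RELOCS
            def_bool y
            depends on X86_32 && RELOCATABLE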
-
- 02 May 2009, 1 commit
-
-
Committed by Yinghai Lu

In create_irq_nr(), move_irq_desc() will try to move the irq_desc to the home node if the allocated one is not correct. (This can happen on devices that are on different nodes and are using MSI, when drivers are loaded and unloaded randomly.)

v2: fix non-smp build
v3: add NUMA_IRQ_DESC to eliminate #ifdefs

[ Impact: improve irq descriptor locality on NUMA systems ]

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
LKML-Reference: <49F95EAE.2050903@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
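NUMA_IRQ_DESC is named in the v3 note; it takes the usual derived-symbol shape:

    config NUMA_IRQ_DESC
            def_bool y
            depends on SPARSE_IRQ && NUMA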
-
- 28 April 2009, 1 commit
-
-
Committed by Yinghai Lu

The original feature of migrating irq_desc dynamically was too fragile and was causing problems: it caused crashes on systems with lots of cards with MSI-X when the user-space irq balancer was enabled.

We now have new patches that create irq_desc according to the device's NUMA node. This patch removes the leftover bits of the dynamic balancer.

[ Impact: remove dead code ]

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
LKML-Reference: <49F654AF.8000808@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 27 April 2009, 1 commit
-
-
Committed by Linus Torvalds

Look at the:

    diff -u arch/x86/boot/compressed/vmlinux_*.lds

output and realize that they're basically exactly the same except for trivial naming differences, and the fact that the 64-bit version has a "pgtable" thing. So unify them.

There's some trivial cleanup there (make the output format a Kconfig thing rather than doing #ifdef's for it, and unify both 32-bit and 64-bit BSS end to "_ebss", where 32-bit used to use the traditional "_end"), but other than that it's really very mindless and straight conversion.

For example, I think we should aim to remove "startup_32" vs "startup_64", and just call it "startup", and get rid of one more difference. I didn't do that.

Also, notice the comment in the unified vmlinux.lds.S talks about "head_64" and "startup_32", which is an odd and incorrect mix, but that was actually what the old 64-bit-only lds file had, so the confusion isn't new, and now that mixing is arguably more accurate thanks to the vmlinux.lds.S file being shared between the two cases ;)

[ Impact: cleanup, unification ]

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
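The "output format as a Kconfig thing" became a string symbol that the linker script consumes; roughly:

    config OUTPUT_FORMAT
            string
            default "elf32-i386" if X86_32
            default "elf64-x86-64" if X86_64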
-
- 22 April 2009, 1 commit
-
-
Committed by Michael K. Johnson

Enforce the NR_CPUS <= 8 limitation if X86_BIGSMP is not set.

Configuring more than 8 logical CPUs on 32-bit x86 requires X86_BIGSMP to be set in order to boot successfully, if more than 8 logical CPUs are actually found at boot time. The X86_BIGSMP help text describes that it is required to be set if more than 8 CPUs are configured, but this was previously not enforced.

This configuration error has affected multiple distributions:

    https://bugzilla.redhat.com/show_bug.cgi?id=480844
    https://issues.rpath.com/browse/RPL-3022

Signed-off-by: Michael K Johnson <johnsonm@rpath.com>
LKML-Reference: <20090422014448.GB32541@logo.rdu.rpath.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
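The enforcement is a conditional range on NR_CPUS; a sketch with the surrounding defaults omitted:

    config NR_CPUS
            int "Maximum number of CPUs"
            depends on SMP
            # X86_BIGSMP is required to go beyond 8 logical CPUs
            # on 32-bit:
            range 2 8 if X86_32 && !X86_BIGSMP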
-
- 21 April 2009, 1 commit
-
-
Committed by Suresh Siddha

Instead of selecting X86_X2APIC, make config X86_UV depend on X86_X2APIC. This eliminates enabling CONFIG_X86_X2APIC without enabling CONFIG_INTR_REMAP.

[ Impact: cleanup ]

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Acked-by: Jack Steiner <steiner@sgi.com>
Cc: dwmw2@infradead.org
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Weidong Han <weidong.han@intel.com>
LKML-Reference: <20090420200450.694598000@linux-os.sc.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
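In other words, the select becomes a dependency (surrounding details of the real X86_UV entry omitted):

    config X86_UV
            bool "SGI Ultraviolet"
            depends on X86_64
            # was: select X86_X2APIC
            depends on X86_X2APIC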
-
- 17 April 2009, 1 commit
-
-
Committed by Yinghai Lu

It causes crashes on systems with lots of cards with MSI-X when the irq balancer is enabled... The patches fixing it were both complex and fragile; according to Eric they were also doing quite dangerous things to the hardware.

Instead we now have patches that solve this problem via static NUMA node mappings - not dynamic allocation and balancing. Those patches are much simpler than this method, but are still too large to merge outside of the merge window, so we mark the dynamic balancer as broken for now, and queue up the new approach for v2.6.31.

[ Impact: deactivate broken kernel feature ]

Reported-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
LKML-Reference: <49E68C41.4020801@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 08 April 2009, 1 commit
-
-
Committed by Jack Steiner

Impact: build fix

Add a Kconfig dependency on NUMA for enabling UV. Although it might be possible to configure non-NUMA UV systems, they are unsupported and not interesting. Much of the infrastructure for UV requires NUMA support.

Signed-off-by: Jack Steiner <steiner@sgi.com>
LKML-Reference: <20090403203942.GA20137@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 07 April 2009, 1 commit
-
-
Committed by David Woodhouse

This build failure:

    | drivers/pci/dmar.c:47: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘dmar_tbl_size’
    | drivers/pci/dmar.c:62: warning: ‘struct acpi_dmar_device_scope’ declared inside parameter list
    | drivers/pci/dmar.c:62: warning: its scope is only this definition or declaration, which is probably not what you want

triggers due to this commit:

    d0b03bd1: x2apic/intr-remap: decouple interrupt remapping from x2apic

which exposed a pre-existing but dormant fragility of the 'select X86_X2APIC' it moved around, and turned that fragility into a build failure. Replace it with a proper 'depends on' construct.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
LKML-Reference: <1239084280.22733.404.camel@macbook.infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
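The underlying hazard is general: 'select' forces a symbol on without checking that symbol's own dependencies, while 'depends on' does. A generic sketch with hypothetical symbols FOO and BAR:

    # Fragile: BAR is forced on even when BAR's own dependencies
    # are not satisfiable, which can yield unbuildable configs.
    #config FOO
    #       bool "Feature foo"
    #       select BAR

    # Robust: FOO is only offered once BAR is already enabled
    # with all of its dependencies met.
    config FOO
            bool "Feature foo"
            depends on BAR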
-
- 04 April 2009, 1 commit
-
-
Committed by Han, Weidong

Interrupt remapping must be enabled before enabling x2apic, but interrupt remapping doesn't depend on x2apic; it can be used separately. Enable interrupt remapping in init_dmars even when x2apic is not supported.

[dwmw2: Update Kconfig accordingly, fix build with INTR_REMAP && !X2APIC]

Signed-off-by: Weidong Han <weidong.han@intel.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
- 01 April 2009, 1 commit
-
-
Committed by Akinobu Mita

CONFIG_DEBUG_PAGEALLOC is now supported by x86, powerpc, sparc64, and s390. This patch implements it for the rest of the architectures by filling the pages with poison byte patterns after free_pages() and verifying the poison patterns before alloc_pages(). This generic version cannot detect invalid page accesses immediately, but an invalid read may cause a later invalid dereference through the poisoned memory, and an invalid write can be detected after a long delay.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
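A simplified sketch of the opt-in/fallback shape this implies. Only the mechanism is from the message; the symbol details are assumptions, loosely following mainline's ARCH_SUPPORTS_DEBUG_PAGEALLOC:

    # Set by architectures with native unmap-pages-on-free support:
    config ARCH_SUPPORTS_DEBUG_PAGEALLOC
            bool

    # Everyone else falls back to generic poison-and-verify:
    config PAGE_POISONING
            def_bool DEBUG_PAGEALLOC && !ARCH_SUPPORTS_DEBUG_PAGEALLOC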
-
- 30 March 2009, 1 commit
-
-
Committed by Matt LaPlante

Signed-off-by: Matt LaPlante <kernel1@cyberdogtech.com>
Acked-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 26 March 2009, 2 commits
-
-
Committed by Thomas Gleixner

Impact: disable unused code

x86 is fully converted to flow handlers. No need to keep the deprecated __do_IRQ() support active.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
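Opting out of the deprecated path is a one-line derived symbol of the era:

    # x86 no longer needs the deprecated __do_IRQ() code:
    config GENERIC_HARDIRQS_NO__DO_IRQ
            def_bool y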
-
Committed by David Woodhouse

If we fix a few highmem-related thinkos and a couple of printk format warnings, the Intel IOMMU driver works fine in a 32-bit kernel.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
- 17 March 2009, 1 commit
-
-
Committed by Joerg Roedel

Impact: make use of DMA-API debugging code in x86

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
-
- 13 March 2009, 3 commits
-
-
Committed by Thomas Gleixner

Impact: disable unused code

x86 is fully converted to flow handlers. No need to keep the deprecated __do_IRQ() support active.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Frederic Weisbecker

Provide the x86 trace callbacks to trace syscalls.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
LKML-Reference: <1236401580-5758-3-git-send-email-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Jan Beulich

Impact: configuration bug fix

Just like for x86-64, the range of widths valid for NODES_SHIFT is not unbounded. The upper bound 64-bit uses is definitely also an upper bound for 32-bit.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
LKML-Reference: <49B90F12.76E4.0078.0@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
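The fix amounts to dropping the 64-bit-only qualifier from the range; a sketch with the defaults omitted and the exact bound assumed from the 64-bit side:

    config NODES_SHIFT
            int "Maximum NUMA Nodes (as a power of 2)"
            # previously: range 1 9 if X86_64
            # now bounded on 32-bit as well:
            range 1 9
            depends on NEED_MULTIPLE_NODES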
-
- 11 March 2009, 2 commits
-
-
Committed by Huang Ying

Impact: New major feature

This patch adds kexec jump support for x86_64. More information about kexec jump can be found in the corresponding x86_32 support patch.

Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by Jaswinder Singh Rajput

Introduce:

    cat /sys/kernel/debug/x86/cpu/*

for Intel and AMD processors to view / debug the state of each CPU. By using this we can debug a whole range of registers and other CPU information for debugging purposes, and monitor how things are changing. This can be useful for developers as well as for users.

Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
LKML-Reference: <1236701373.3387.4.camel@localhost.localdomain>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
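The interface is gated by a config symbol; mainline used X86_CPU_DEBUG, roughly as below (prompt and help wording are approximations):

    config X86_CPU_DEBUG
            tristate "/sys/kernel/debug/x86/cpu/* - CPU Debug support"
            help
              If you select this option, it will provide various x86
              CPU debug details through the /sys/kernel/debug/x86/cpu/
              interface.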
-
- 27 February 2009, 1 commit
-
-
Committed by Kyle McMartin

Now that the obvious bugs have been worked out, specifically the iwlagn issue and the write buffer errata, DMAR should be safe to turn back on by default. (We've had it on since those patches were first written a few weeks ago, without any noticeable bug reports - most have been due to the dma-api debug patchset.)

Signed-off-by: Kyle McMartin <kyle@redhat.com>
Acked-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
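Mainline expresses this default through a companion symbol; the name DMAR_DEFAULT_ON matches mainline, the rest is paraphrase:

    config DMAR_DEFAULT_ON
            def_bool y
            prompt "Enable DMA Remapping Devices by default"
            depends on DMAR
            help
              Enable DMAR (Intel IOMMU) at boot without requiring
              intel_iommu=on on the kernel command line.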
-
- 25 February 2009, 1 commit
-
-
Committed by Andi Kleen

Impact: cleanup; preparation for feature

The mce_amd_64 code has its own private MC threshold vector with its own interrupt handler. Since Intel needs a similar handler, it makes sense to share the vector, because both cannot be active at the same time. I factored the common APIC handler code into a separate file which can be used by both the Intel and AMD MC code.

This is needed for the next patch, which adds an Intel-specific CMCI handler. This patch should be a nop for AMD; it just moves some code around.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 24 February 2009, 1 commit
-
-
Committed by Tejun Heo

Impact: cleaner and consistent bootmem wrapping

By setting CONFIG_HAVE_ARCH_BOOTMEM_NODE, archs can define arch-specific wrappers for bootmem allocation. However, this is done a bit strangely in that only the high-level convenience macros can be changed while the lower-level, but still exported, interface functions can't be wrapped. This is not only messy but also leads to the strange situation where alloc_bootmem() does what the arch wants it to do but the equivalent __alloc_bootmem() call doesn't, although they should be usable interchangeably.

This patch updates bootmem such that archs can override / wrap the backend function - alloc_bootmem_core() - instead of the high-level interface functions, to allow simpler and consistent wrapping. Also, HAVE_ARCH_BOOTMEM_NODE is renamed to HAVE_ARCH_BOOTMEM.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Johannes Weiner <hannes@saeurebad.de>
-
- 23 February 2009, 2 commits
-
-
Committed by Ingo Molnar

Impact: remove unused/broken code

The Voyager subarch last built successfully on the v2.6.26 kernel and has been stale since then; it does not build on the v2.6.27, v2.6.28 and v2.6.29-rc5 kernels. No actual users beyond the maintainer reported this breakage. Patches were sent and most of the fixes were accepted, but the discussion around how to handle a few remaining issues cleanly fizzled out with no resolution, and the code remained broken.

In the v2.6.30 x86 tree development cycle, 32-bit subarch support has been reworked and removed - and the Voyager code, beyond the build problems already known, needs serious and significant changes and probably a rewrite to support it. CONFIG_X86_VOYAGER was therefore marked BROKEN. The maintainer has been notified, but no patches have been sent so far to fix it.

While all other subarchs have been converted to the new scheme, Voyager is still broken. We'd prefer to receive patches which clean up the current situation in a constructive way, but even in case of removal there is no obstacle to adding that support back after the issues have been sorted out in a mutually acceptable fashion.

So remove this inactive code for now.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Ravikiran G Thirumalai

Change the CONFIG_X86_EXTENDED_PLATFORM help text to display the 32bit/64bit extended platform list. This is as suggested by Ingo.

Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org>
Cc: shai@scalex86.org
Cc: "Benzi Galili (Benzi@ScaleMP.com)" <benzi@scalemp.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 20 February 2009, 1 commit
-
-
Committed by Tejun Heo

Impact: use new dynamic allocator, unified access to static/dynamic percpu memory

Convert to the new dynamic percpu allocator.

* implement populate_extra_pte() for both 32 and 64
* update setup_per_cpu_areas() to use pcpu_setup_static()
* define __addr_to_pcpu_ptr() and __pcpu_ptr_to_addr()
* define config HAVE_DYNAMIC_PER_CPU_AREA

Signed-off-by: Tejun Heo <tj@kernel.org>
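The HAVE_DYNAMIC_PER_CPU_AREA opt-in named in the last bullet above is a bare derived symbol in arch/x86/Kconfig:

    config HAVE_DYNAMIC_PER_CPU_AREA
            def_bool y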
-
- 17 February 2009, 2 commits
-
-
Committed by Ingo Molnar

- make oprofile build
- select X86_X2APIC from X86_UV - it relies on it
- export genapic for oprofile modular build

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Yinghai Lu

Impact: cleanup

Make x2apic deselectable; INTR_REMAP will select x2apic.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 12 February 2009, 1 commit
-
-
Committed by Ingo Molnar

This commit:

    aced3ce: x86/Voyager: remove HIBERNATION Kconfig quirk

made hibernation available only on UP - instead of making it available on all of x86. Fix it.

Reported-by: Jiri Slaby <jirislaby@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-