- 15 January 2011, 1 commit
-
-
Submitted by John Stultz

Konrad Wilk reported that the new delayed calibration crashes with a divide by zero on Xen. The reason is that Xen sets the pmtimer address, but reading from it returns 0xffffff. That results in the ref_start and ref_stop values being the same, so the delta is zero, which causes the divide by zero later in the calculation.

The conditional (!hpet && !ref_start && !ref_stop) which sanity checks the calibration reference values doesn't really make sense: if the refs are null but hpet is on, we still want to break out.

The divide by zero would also be possible to trigger by chance if both reads from the hardware returned the exact same value (due to hardware wrapping). So checking whether the two ref values are equal handles both the case where we have no hardware (both null) and the case where they happen to match (either because of invalid hardware or by chance), avoiding the divide by zero.

[ tglx: Applied the same fix to native_calibrate_tsc() where this check was copied from ]

Reported-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: John Stultz <johnstul@us.ibm.com>
LKML-Reference: <1295024788-15619-1-git-send-email-johnstul@us.ibm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
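The shape of the fix can be sketched in C. This is an illustrative fragment, not the actual kernel hunk; the helper name is made up for the example.

/*
 * Sketch of the corrected sanity check: ref1/ref2 are the two reference
 * clock reads (HPET or PM timer) taken around the TSC sampling window.
 * If they are equal -- no hardware (both zero) or the same bogus value
 * twice -- the reference delta is zero, so refuse to divide by it.
 */
static u64 ref_delta_or_zero(u64 ref1, u64 ref2)
{
	if (ref1 == ref2)	/* replaces (!hpet && !ref1 && !ref2) */
		return 0;	/* caller skips the refinement this round */

	return ref2 - ref1;	/* now safe to use as a divisor */
}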
-
- 11 January 2011, 5 commits
-
-
Submitted by Arjan van de Ven

The x86 fixmaps need to be all together; unfortunately the VRTC one was misplaced. This patch makes sure the MRST VRTC fixmap is put prior to the __end_of_permanent_fixed_addresses marker.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Alan Cox <alan@linux.intel.com>
LKML-Reference: <20110111105544.24448.27607.stgit@bob.linux.org.uk>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Alek Du

We need this for x86 MID platforms where GPIO interrupts are used. No special magic is needed, so the default 1:1 behaviour will do nicely.

Signed-off-by: Alek Du <alek.du@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@intel.com>
Signed-off-by: Alan Cox <alan@linux.intel.com>
LKML-Reference: <20110111105439.24448.69863.stgit@bob.linux.org.uk>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Jack Steiner

Westmere processors use a different algorithm for assigning APICIDs on SGI UV systems. The location of the node number within the apicid is now a function of the processor type.

Signed-off-by: Jack Steiner <steiner@sgi.com>
LKML-Reference: <20110110195210.GA18737@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Jan Beulich

While both methods should work equally well in the native case, the Xen Dom0 case can't reliably work with the MSR one, since there's no guarantee that the virtual CPUs it has available fully cover all the necessary physical ones. As per the suggestion of Robert Richter, the patch only adds the PCI method but leaves the MSR one as a fallback to cover new systems whose PCI IDs may not have been added to the code base yet.

The only change in v2 is the breaking out of the new PCI initialization method into a separate function, as requested by Ingo.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Acked-by: Robert Richter <robert.richter@amd.com>
Cc: Andreas Herrmann3 <Andreas.Herrmann3@amd.com>
Cc: Joerg Roedel <joerg.roedel@amd.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
LKML-Reference: <4D2B3FD7020000780002B67D@vpn.id2.novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Thomas Gleixner

Commit a8760eca ("x86: Check tsc available/disabled in the delayed init function") failed to prevent the setup of the delayed init function when the initial tsc calibration failed. This results in the same divide by zero bug as we have seen without the tsc-disabled check.

Skip the delayed work setup when tsc_khz (the initial calibration value) is 0.

Bisected-and-tested-by: Kirill A. Shutemov <kas@openvz.org>
Cc: John Stultz <john.stultz@linaro.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
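A minimal sketch of that guard, assuming the tsc_khz variable and tsc_irqwork work item as used by arch/x86/kernel/tsc.c; the init hook name is made up:

/*
 * Sketch: do not schedule the calibration refinement work if the early
 * calibration produced tsc_khz == 0, otherwise the refinement math
 * divides by that zero value later.
 */
static int __init init_tsc_refinement(void)
{
	if (!tsc_khz)		/* initial calibration failed */
		return 0;

	schedule_delayed_work(&tsc_irqwork, 0);
	return 0;
}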
-
- 10 January 2011, 1 commit
-
-
Submitted by Pierre Tardy

The latest Atom SoCs (Penwell) do not have an HPET timer. Since their local APIC timer is clocked at 400 kHz and the current code limits the Initial Count register to 23 bits, they cannot sleep for more than 1.34 seconds, which leads to roughly 2 spurious wakeups per second (1 per thread).

These SoCs support a 32-bit timer, so we raise the max_delta to at least 31 bits; with that we can sleep for at least 300 seconds. We could not find any previous chip erratum where the lapic would only have 23-bit precision. As powertop suggests activating the HPET to "sleep longer", this problem may already be known.

The problem has been present since the very first implementation of the lapic timer as a clock event, e9e2cdb4 ("[PATCH] clockevents: i386 drivers").

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Pierre Tardy <pierre.tardy@intel.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: Adrian Bunk <bunk@stusta.de>
Cc: H. Peter Anvin <hpa@linux.intel.com>
Cc: john stultz <johnstul@us.ibm.com>
Cc: Roman Zippel <zippel@linux-m68k.org>
Cc: Andi Kleen <ak@suse.de>
LKML-Reference: <1294327409-19426-1-git-send-email-pierre.tardy@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
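In clockevents terms the change amounts to programming a wider maximum delta for the lapic clock event device. A hedged sketch: lapic_clockevent and clockevent_delta2ns() are the usual x86 apic symbols, the constants are illustrative, not the committed values.

/*
 * Sketch: widen the lapic timer's programmable range from 23 bits to
 * 31 bits so a slow (400 kHz) timer can cover multi-minute idle periods.
 */
#define LAPIC_TIMER_MAX_DELTA	0x7fffffffUL	/* 31 bits; was 0x7fffff */

static void __init lapic_clockevent_setup_range(void)
{
	lapic_clockevent.max_delta_ns =
		clockevent_delta2ns(LAPIC_TIMER_MAX_DELTA, &lapic_clockevent);
	lapic_clockevent.min_delta_ns =
		clockevent_delta2ns(0xF, &lapic_clockevent);
}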
-
- 09 January 2011, 1 commit
-
-
Submitted by Randy Dunlap

Fix sparse warnings for non-ANSI function declarations:

  arch/x86/kernel/smpboot.c:100:30: warning: non-ANSI function declaration of function 'cpu_hotplug_driver_lock'
  arch/x86/kernel/smpboot.c:105:32: warning: non-ANSI function declaration of function 'cpu_hotplug_driver_unlock'

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
LKML-Reference: <20110108195914.95d366ea.randy.dunlap@oracle.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
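For reference, the warning is about empty parameter lists; a sketch of the kind of change involved (the mutex name is an assumption, not necessarily what smpboot.c uses):

#include <linux/mutex.h>

/*
 * In C, "void f()" declares a function with an unspecified argument
 * list; sparse flags that as non-ANSI.  Spelling it "void f(void)"
 * silences the warning without changing behaviour.
 */
static DEFINE_MUTEX(x86_cpu_hotplug_driver_mutex);

void cpu_hotplug_driver_lock(void)	/* was: cpu_hotplug_driver_lock() */
{
	mutex_lock(&x86_cpu_hotplug_driver_mutex);
}

void cpu_hotplug_driver_unlock(void)	/* was: cpu_hotplug_driver_unlock() */
{
	mutex_unlock(&x86_cpu_hotplug_driver_mutex);
}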
-
- 07 January 2011, 1 commit
-
-
Submitted by David Rientjes

"x86, numa: Fake node-to-cpumask for NUMA emulation" broke the build when CONFIG_DEBUG_PER_CPU_MAPS is set and CONFIG_NUMA_EMU is not. This is because it is possible to map a cpu to multiple nodes when NUMA emulation is used; the patch required a physical node address table to find those nodes, which was only available when CONFIG_NUMA_EMU was enabled.

This extracts the common debug functionality into its own function for CONFIG_DEBUG_PER_CPU_MAPS and uses it regardless of whether CONFIG_NUMA_EMU is set. NUMA emulation will now iterate over the set of possible nodes for each cpu and call the new debug function, whereas only the cpu's own node will be used when NUMA emulation is disabled.

Reported-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: David Rientjes <rientjes@google.com>
Acked-by: Yinghai Lu <yinghai@kernel.org>
LKML-Reference: <alpine.DEB.2.00.1012301053590.12995@chino.kir.corp.google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 05 January 2011, 4 commits
-
-
Submitted by Huang Ying

Prevent the long delay in io_check_error() from making the NMI watchdog time out.

Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Don Zickus <dzickus@redhat.com>
LKML-Reference: <1294198689-15447-3-git-send-email-dzickus@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Dongdong Deng

The spin_lock_debug/rcu_cpu_stall detectors use trigger_all_cpu_backtrace() to dump cpu backtraces. Therefore it is possible for trigger_all_cpu_backtrace() to be called at the same time on different CPUs, which triggers an 'unknown reason NMI' warning. The following case illustrates the problem:

       CPU1                     CPU2           ...       CPU N
  trigger_all_cpu_backtrace()
  set "backtrace_mask" to cpu mask
          |
  generate NMI interrupts   generate NMI interrupts      ...
          \                      |                        /
           \                     |                       /

The "backtrace_mask" is cleared by the first NMI interrupt in nmi_watchdog_tick(); the subsequent NMI interrupts generated by the other cpus' arch_trigger_all_cpu_backtrace() are then treated as unknown-reason NMI interrupts.

This patch uses a test_and_set to avoid the problem, and stops arch_trigger_all_cpu_backtrace() from being entered again, to avoid dumping a duplicate cpu backtrace when a trigger_all_cpu_backtrace() is already in progress.

Signed-off-by: Dongdong Deng <dongdong.deng@windriver.com>
Reviewed-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Cc: fweisbec@gmail.com
LKML-Reference: <1294198689-15447-2-git-send-email-dzickus@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Don Zickus <dzickus@redhat.com>
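The serialization idea can be sketched as follows; this is illustrative only, and the flag variable and function name are assumptions, not the kernel's identifiers.

#include <linux/bitops.h>

static unsigned long backtrace_in_progress;	/* assumed flag for the sketch */

/*
 * Sketch: only the first caller gets to own the all-CPU backtrace; a
 * concurrent caller bails out, so its NMIs are not misattributed as
 * "unknown reason NMI" once backtrace_mask has already been cleared.
 */
void trigger_all_cpu_backtrace_sketch(void)
{
	if (test_and_set_bit(0, &backtrace_in_progress))
		return;			/* a dump is already in progress */

	/* ... set backtrace_mask, raise the NMIs, wait for the output ... */

	clear_bit(0, &backtrace_in_progress);
}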
-
Submitted by Don Zickus

There are some paths that walk the die_chain with preemption on. Make sure we are actually in an NMI call before we start doing anything. This was triggered by do_general_protection calling notify_die with DIE_GPF.

Reported-by: Jan Kiszka <jan.kiszka@web.de>
Signed-off-by: Don Zickus <dzickus@redhat.com>
LKML-Reference: <1294198689-15447-1-git-send-email-dzickus@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Yinghai Lu

On one x2apic pre-enabled system, x2apic_mode suddenly became corrupted after registering some cpus when the kernel was compiled with CONFIG_NR_CPUS=255 instead of 512.

It turns out that generic_processor_info() ==> physid_set(apicid, phys_cpu_present_map) causes the problem: phys_cpu_present_map is sized by MAX_APICS bits, and on a pre-enabled system some cpus have an apic id > 255, so the variables placed after phys_cpu_present_map can get corrupted silently:

  ffffffff828e8420 B phys_cpu_present_map
  ffffffff828e8440 B apic_verbosity
  ffffffff828e8444 B local_apic_timer_c2_ok
  ffffffff828e8448 B disable_apic
  ffffffff828e844c B x2apic_mode
  ffffffff828e8450 B x2apic_disabled
  ffffffff828e8454 B num_processors
  ...

phys_cpu_present_map is actually indexed by apic id, not by cpu index, so we should size it with MAX_LOCAL_APIC instead of MAX_APICS. For 64-bit that will be 32768 in all cases. BSS increases by 4k bytes on 64-bit:

     text      data      bss       dec       filename
  21696943   4193748  12787712  38678403   vmlinux.before
  21696943   4193748  12791808  38682499   vmlinux.after

No change on 32-bit. Finally, we can remove MAX_APICS, which was rather confusing.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Cc: H. Peter Anvin <hpa@linux.intel.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
LKML-Reference: <4D23BD9C.3070102@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
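The sizing issue boils down to how wide the physid bitmap is declared; a hedged illustration (the typedef name is changed here so it is not mistaken for the kernel's own physid_mask_t, and the 4k BSS growth follows from 32768 bits vs 256 bits):

#include <linux/bitmap.h>

/*
 * Illustration only: the map is indexed by APIC ID, so it must have
 * MAX_LOCAL_APIC bits.  Sizing it by MAX_APICS (256) lets an x2APIC ID
 * above 255 write past the end of the map, silently corrupting whatever
 * the linker placed next (x2apic_mode in the layout above).
 */
typedef struct {
	DECLARE_BITMAP(mask, MAX_LOCAL_APIC);	/* was: MAX_APICS */
} physid_mask_sketch_t;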
-
- 04 January 2011, 5 commits
-
-
Submitted by Rusty Russell

v2.6.36-rc8-54-gb40827fa ("x86-32, mm: Add an initial page table for core bootstrapping") made x86 boot using initial_page_table and broke lguest. For 2.6.37 we simply cut & pasted the initialization code into lguest (da32dac1 "lguest: populate initial_page_table"); now we fix it properly by doing that initialization before the paravirt jump.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: lguest <lguest@ozlabs.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <201101041720.54535.rusty@rustcorp.com.au>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Thomas Renninger

Add these new power trace events:

  power:cpu_idle
  power:cpu_frequency
  power:machine_suspend

The old C-state/idle accounting events, power:power_start and power:power_end, now have a replacement in power:cpu_idle (we are still keeping the old tracepoints for compatibility). power:power_frequency is replaced with power:cpu_frequency, and power:machine_suspend is newly introduced. Jean Pihet has a patch integrated into the generic layer (kernel/power/suspend.c) which will make use of it.

The type= field got removed from both; it was never used, and the type is conveyed by the event type itself. The perf timechart userspace tool gets adjusted in a separate patch.

Signed-off-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Jean Pihet <jean.pihet@newoldbits.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: rjw@sisk.pl
LKML-Reference: <1294073445-14812-3-git-send-email-trenn@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
LKML-Reference: <1290072314-31155-2-git-send-email-trenn@suse.de>
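A rough usage sketch of the renamed tracepoints; the argument ordering and the PWR_EVENT_EXIT marker are assumptions that should be checked against include/trace/events/power.h before relying on them.

#include <trace/events/power.h>

/* Hedged example: emit the new-style events around an idle transition
 * and on a frequency change; cpu is the CPU id, state the C-state. */
static void power_trace_example(unsigned int state, unsigned int cpu,
				unsigned int new_khz)
{
	trace_cpu_idle(state, cpu);		/* replaces power:power_start */
	/* ... enter the idle state ... */
	trace_cpu_idle(PWR_EVENT_EXIT, cpu);	/* replaces power:power_end */

	trace_cpu_frequency(new_khz, cpu);	/* replaces power:power_frequency */
}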
-
Submitted by Andrew Morton

Addresses https://bugzilla.kernel.org/show_bug.cgi?id=25702

Reported-by: Martin Ettl <ettl.martin@gmx.de>
Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Cliff Wickman

Fix a hard-coded limit of a maximum of 16 cpus per socket. The UV Broadcast Assist Unit code initializes by scanning the cpu topology of the system and assigning a master cpu for each socket and UV hub. That scan assumed a limit of 16 cpus per socket. With Westmere we are going over that limit. The UV hub hardware allows up to 32. If the scan finds that the system has gone over the limit, it returns an error, and we print a warning and fall back to doing TLB shootdowns without the BAU.

Signed-off-by: Cliff Wickman <cpw@sgi.com>
Cc: <stable@kernel.org> # .37.x
LKML-Reference: <E1PZol7-0000mM-77@eag09.americas.sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by R, Durgadoss

This patch adds code to therm_throt.c to notify core thermal threshold events. These thresholds are supported by the IA32_THERM_INTERRUPT register, and the corresponding status/log is monitored using the IA32_THERM_STATUS register. The necessary #defines are in msr-index.h. A callback is added to mce.h to further notify the thermal stack about the threshold events.

Signed-off-by: Durgadoss R <durgadoss.r@intel.com>
LKML-Reference: <D6D887BA8C9DFF48B5233887EF04654105C1251710@bgsmsx502.gar.corp.intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- 03 January 2011, 5 commits
-
-
Submitted by Aric D. Blumer

Before this patch, the following error would sometimes occur after a resume on pxa3xx:

  /path/to/mm/memory.c:144: bad pmd 8040542e.

The problem was that a temporary page table mapping was being improperly restored. The PXA3xx resume code creates a temporary mapping of resume_turn_on_mmu to avoid a prefetch abort. The pxa3xx_resume_after_mmu code requires that the r1 register holding the address of this mapping not be modified; however, resume_turn_on_mmu does modify it. It is mostly correct in that r1 receives the base table address, but it may also get other bits in 13:0. This results in pxa3xx_resume_after_mmu restoring the original mapping to the wrong place, corrupting memory and leaving the temporary mapping in place.

Signed-off-by: Matt Reimer <mreimer@sdgsystems.com>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
-
Submitted by Mike Rapoport

Commit 6ac6b817 ("ARM: pxa: encode IRQ number into .nr_irqs") removed the definition of ITE_LAST_IRQ, which caused the following build error:

    CC      arch/arm/common/it8152.o
  arch/arm/common/it8152.c: In function 'it8152_init_irq':
  arch/arm/common/it8152.c:86: error: 'IT8152_LAST_IRQ' undeclared (first use in this function)
  arch/arm/common/it8152.c:86: error: (Each undeclared identifier is reported only once
  arch/arm/common/it8152.c:86: error: for each function it appears in.)
  make[2]: *** [arch/arm/common/it8152.o] Error 1

Defining IT8152_LAST_IRQ in arch/arm/include/hardware/it8152.c fixes the build.

Signed-off-by: Mike Rapoport <mike@compulab.co.il>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
-
Submitted by Lennert Buytenhek

arch/arm/mach-pxa/eseries.c references w100fb_gpio_{read,write}() directly.

Signed-off-by: Lennert Buytenhek <buytenh@secretlab.ca>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
-
Submitted by Robert Richter

Disable preemption in init_ibs(). The function only checks the IBS capabilities and sets up pci devices (if necessary). It runs only on one cpu but operates on the local APIC and some MSRs, so it is better to disable preemption.

  [    7.034377] BUG: using smp_processor_id() in preemptible [00000000] code: modprobe/483
  [    7.034385] caller is setup_APIC_eilvt+0x155/0x180
  [    7.034389] Pid: 483, comm: modprobe Not tainted 2.6.37-rc1-20101110+ #1
  [    7.034392] Call Trace:
  [    7.034400]  [<ffffffff812a2b72>] debug_smp_processor_id+0xd2/0xf0
  [    7.034404]  [<ffffffff8101e985>] setup_APIC_eilvt+0x155/0x180
  [ ... ]

Addresses https://bugzilla.kernel.org/show_bug.cgi?id=22812

Reported-by: <atswartz@gmail.com>
Signed-off-by: Robert Richter <robert.richter@amd.com>
Cc: oprofile-list@lists.sourceforge.net <oprofile-list@lists.sourceforge.net>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Rafael J. Wysocki <rjw@sisk.pl>
Cc: Dan Carpenter <error27@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: <stable@kernel.org> [2.6.37.x]
LKML-Reference: <20110103111514.GM4739@erda.amd.com>
[ small cleanups ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
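A minimal sketch of the kind of bracket the fix adds; the helper being wrapped is a placeholder, not the real oprofile code.

#include <linux/preempt.h>

static int setup_ibs_on_this_cpu(void)
{
	/* placeholder for the real EILVT/MSR programming */
	return 0;
}

/*
 * Sketch: the APIC/MSR accesses must not migrate between CPUs while the
 * setup runs, so keep preemption off around them.
 */
static int init_ibs_sketch(void)
{
	int ret;

	preempt_disable();
	ret = setup_ibs_on_this_cpu();
	preempt_enable();

	return ret;
}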
-
Submitted by Axel Lin

This patch fixes the build error below by adding the missing asm/memory.h include, which is needed for arch_is_coherent().

  $ make pxa3xx_defconfig; make
    CC      init/do_mounts_rd.o
  In file included from include/linux/list_bl.h:5,
                   from include/linux/rculist_bl.h:7,
                   from include/linux/dcache.h:7,
                   from include/linux/fs.h:381,
                   from init/do_mounts_rd.c:3:
  include/linux/bit_spinlock.h: In function 'bit_spin_unlock':
  include/linux/bit_spinlock.h:61: error: implicit declaration of function 'arch_is_coherent'
  make[1]: *** [init/do_mounts_rd.o] Error 1
  make: *** [init] Error 2

Signed-off-by: Axel Lin <axel.lin@gmail.com>
Acked-by: Peter Huewe <peterhuewe@gmx.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 02 January 2011, 1 commit
-
-
Submitted by Avi Kivity

isr_ack is never initialized, so until the first PIC reset, interrupts may fail to be injected. This can cause Windows XP to fail to boot, as reported in the fallout from the fix to https://bugzilla.kernel.org/show_bug.cgi?id=21962.

Reported-and-tested-by: Nicolas Prochazka <prochazka.nicolas@gmail.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
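In i8259-emulation terms the fix amounts to giving isr_ack a sane starting value when the PIC state is created; a hedged sketch with an abbreviated, illustrative structure rather than KVM's real one:

#include <linux/types.h>

/*
 * Sketch only: 0xff means "all ISR bits already acked", which is what
 * the reset path establishes; left uninitialized, injection can stall
 * until the guest performs its first PIC reset.
 */
struct pic_state_sketch {
	u8 irr;		/* interrupt request register */
	u8 isr;		/* interrupt service register */
	u8 isr_ack;	/* interrupt ack detection    */
};

static void pic_state_create(struct pic_state_sketch *s)
{
	s->irr = 0;
	s->isr = 0;
	s->isr_ack = 0xff;	/* previously left uninitialized */
}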
-
- 29 December 2010, 2 commits
-
-
Submitted by Avi Kivity

We use the physical address instead of the base gfn for the four PAE page directories we use in unpaged mode. When the guest accesses an address above 1GB that is backed by a large host page, a BUG_ON() in kvm_mmu_set_gfn() triggers.

Resolves: https://bugzilla.kernel.org/show_bug.cgi?id=21962

Reported-and-tested-by: Nicolas Prochazka <prochazka.nicolas@gmail.com>
KVM-Stable-Tag.
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Submitted by Imre Kaloz

arm: export dma_set_coherent_mask

While a regression was fixed with commit 710224fa ("arm: fix pci_set_consistent_dma_mask for dmabounce devices"), a new one was introduced because dma_set_coherent_mask wasn't exported for modules. This patch takes care of that issue.

Signed-off-by: Imre Kaloz <kaloz@openwrt.org>
Signed-off-by: Krzysztof Hałasa <khc@pm.waw.pl>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 28 December 2010, 1 commit
-
-
Submitted by Cliff Wickman

halt() should use native_halt(); safe_halt() uses native_safe_halt().

If CONFIG_PARAVIRT=y, halt() is defined in arch/x86/include/asm/paravirt.h as:

  static inline void halt(void)
  {
          PVOP_VCALL0(pv_irq_ops.safe_halt);
  }

Otherwise (no CONFIG_PARAVIRT), halt() in arch/x86/include/asm/irqflags.h is:

  static inline void halt(void)
  {
          native_halt();
  }

So it looks to me like the CONFIG_PARAVIRT case of using native_safe_halt() for a halt() is an oversight. Am I missing something? It probably hasn't shown up as a problem because the local apic is disabled on a shutdown or restart. But if we disable interrupts and call halt(), we shouldn't expect that the halt() will re-enable interrupts.

Signed-off-by: Cliff Wickman <cpw@sgi.com>
LKML-Reference: <E1PSBcz-0001g1-FM@eag09.americas.sgi.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
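Going by the snippets quoted in the message, the correction is to route halt() to the raw halt op; a sketch of the resulting paravirt mapping (not necessarily the exact committed hunk):

  /* Corrected mapping (sketch): halt() must not go through the
   * interrupt-enabling safe_halt paravirt op. */
  static inline void halt(void)
  {
          PVOP_VCALL0(pv_irq_ops.halt);           /* was: pv_irq_ops.safe_halt */
  }

  static inline void safe_halt(void)
  {
          PVOP_VCALL0(pv_irq_ops.safe_halt);
  }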
-
- 27 December 2010, 1 commit
-
-
Submitted by Jesper Juhl

In arch/x86/kernel/microcode_intel.c::generic_load_microcode() we have this:

  while (leftover) {
          ...
          if (get_ucode_data(mc, ucode_ptr, mc_size) ||
              microcode_sanity_check(mc) < 0) {
                  vfree(mc);
                  break;
          }
          ...
  }
  if (mc)
          vfree(mc);

This will cause a double free of 'mc'. This patch fixes that by simply removing the vfree() call in the loop, since 'mc' will be freed nicely just after we break out of the loop.

There's also a second change in the patch. I noticed a lot of checks for pointers being NULL before passing them to vfree(). That's completely redundant, since vfree() deals gracefully with being passed a NULL pointer. Removing the redundant checks yields a nice size decrease for the object file.

Size before the patch:
     text    data     bss     dec     hex filename
     4578     240    1032    5850    16da arch/x86/kernel/microcode_intel.o
Size after the patch:
     text    data     bss     dec     hex filename
     4489     240     984    5713    1651 arch/x86/kernel/microcode_intel.o

Signed-off-by: Jesper Juhl <jj@chaosbits.net>
Acked-by: Tigran Aivazian <tigran@aivazian.fsnet.co.uk>
Cc: Shaohua Li <shaohua.li@intel.com>
LKML-Reference: <alpine.LNX.2.00.1012251946100.10759@swampdragon.chaosbits.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 24 December 2010, 12 commits
-
-
Submitted by Todd Android Poynor

If the irqsoff tracer is in use, stop tracing the interrupt-disable interval when returning to userspace. Tracing userspace execution time as interrupts-disabled time is not helpful for kernel performance analysis. Only do so if the irqsoff tracer is enabled, to avoid overhead for lockdep, which doesn't care.

Signed-off-by: Todd Poynor <toddpoynor@google.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Stephen Warren

... and also remove the misleading comment stating that this header is auto-generated.

Signed-off-by: Stephen Warren <swarren@nvidia.com>
Acked-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Paul Mundt

The master clock initialization for SH7201 was wholly bogus. Users of the legacy API must initialize the clock rate through the struct clk itself rather than returning the clock frequency; given that the init function itself is void, returning the frequency isn't terribly effective.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
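A hedged sketch of the legacy-API pattern being described; the frequency value is a placeholder, not the SH7201's actual master clock.

#define SH7201_MASTER_CLK_HZ	33333333	/* illustrative value only */

/*
 * Sketch: under the legacy sh clock API the init hook is void, so the
 * rate must be stored in the struct clk rather than returned.
 */
static void master_clk_init(struct clk *clk)
{
	clk->rate = SH7201_MASTER_CLK_HZ;	/* not: return SH7201_MASTER_CLK_HZ; */
}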
-
Submitted by Paul Mundt

With some recent tidying of duplicate register definitions the se7206 IRQ code broke:

  arch/sh/boards/mach-se/7206/irq.c: error: 'INTC_ICR' undeclared (first use in this function)
  arch/sh/boards/mach-se/7206/irq.c: error: (Each undeclared identifier is reported only once
  arch/sh/boards/mach-se/7206/irq.c: error: for each function it appears in.)

Fix it up.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Submitted by Paul Mundt

Some of the SH4-202 code was overlooked in the set_rate() API conversion, resulting in:

  arch/sh/kernel/cpu/sh4/clock-sh4-202.c: error: too many arguments to function 'clk->ops->set_rate'

Fix it up.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Submitted by David Rientjes

NUMA boot code assumes that physical node ids start at 0, but the DIMMs that the apic id represents may not be reachable. If this is the case, node 0 is never online and cpus never end up getting appropriately assigned to a node. This causes the cpumask of all online nodes to be empty and machines crash with kernel code assuming online nodes have valid cpus.

The fix is to appropriately map all the address ranges for physical nodes and to ensure the cpu-to-node mapping function checks all possible nodes (up to MAX_NUMNODES) for valid address ranges instead of simply checking nodes 0-N, where N is the number of physical nodes. This requires no longer "compressing" the address ranges of nodes in the physical node map from 0-N, but rather leaving the indices in physnodes[] to represent the actual node id of the physical node. Accordingly, the topology exported by both amd_get_nodes() and acpi_get_nodes() no longer must return the number of nodes to iterate through; all such iterations will now be to MAX_NUMNODES.

This change also passes the end address of system RAM (which may be different from normal operation if mem= is specified on the command line) before the physnodes[] array is populated. ACPI-parsed nodes are truncated to fit within the address range that respects the mem= boundaries, and even some physical nodes may become unreachable in such cases. When NUMA emulation does succeed, any apicid-to-node mapping that exists for unreachable nodes is given default values so that proximity domains can still be assigned. This is important for node_distance() to function as desired.

Signed-off-by: David Rientjes <rientjes@google.com>
LKML-Reference: <alpine.DEB.2.00.1012221702090.3701@chino.kir.corp.google.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Submitted by David Rientjes

It's necessary to fake the node-to-cpumask mapping so that an emulated node ID returns a cpumask that includes all cpus that have affinity to the memory it represents. This is a little intrusive because it requires knowledge of the physical topology of the system. setup_physnodes() gives us that information, but since NUMA emulation ends up altering the physnodes array, it's necessary to reset it before cpus are brought online. Accordingly, the physnodes array is moved out of init.data and into cpuinit.data since it will be needed on cpuup callbacks.

This works regardless of whether numa=fake is used on the command line, or whether the setup of the fake nodes succeeds or fails. The physnodes array always contains the physical topology of the machine if CONFIG_NUMA_EMU is enabled and can be used to set up the correct node-to-cpumask mappings in all cases, since setup_physnodes() is called whenever the array needs to be repopulated with the correct data.

To fake the actual mappings, numa_add_cpu() and numa_remove_cpu() are rewritten for CONFIG_NUMA_EMU so that we first find the physical node to which each cpu has local affinity, then iterate through all online nodes to find the emulated nodes that have local affinity to that physical node, and finally map the cpu to each of those emulated nodes.

Signed-off-by: David Rientjes <rientjes@google.com>
LKML-Reference: <alpine.DEB.2.00.1012221701520.3701@chino.kir.corp.google.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Submitted by David Rientjes

This patch adds the equivalent of acpi_fake_nodes() for AMD Northbridge platforms. The goal is to fake the apicid-to-node mappings for NUMA emulation so the physical topology of the machine is correctly maintained within the kernel.

This change also fakes proximity domains for both ACPI and k8 code so the physical distance between emulated nodes is maintained via node_distance(). This exports the correct distances via /sys/devices/system/node/.../distance based on the underlying topology.

A new helper function, fake_physnodes(), is introduced to invoke the correct NUMA code to fake these two mappings based on the system type. If there is no underlying NUMA configuration, all cpus are mapped to node 0 for local distance. Since acpi_fake_nodes() is no longer called with CONFIG_ACPI_NUMA, its prototype can be removed from the header file for such a configuration.

Signed-off-by: David Rientjes <rientjes@google.com>
LKML-Reference: <alpine.DEB.2.00.1012221701360.3701@chino.kir.corp.google.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Submitted by David Rientjes

Both acpi_get_nodes() and amd_get_nodes() are only necessary when CONFIG_NUMA_EMU is enabled, so avoid compiling them when the option is disabled.

Signed-off-by: David Rientjes <rientjes@google.com>
LKML-Reference: <alpine.DEB.2.00.1012221701210.3701@chino.kir.corp.google.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Submitted by David Rientjes

This patch changes the minimum fake node size from 64MB to 32MB so it is possible to test NUMA code at a greater scale on smaller machines (64 nodes on a 2G machine, 1024 nodes on a 32G machine with CONFIG_NODES_SHIFT=10).

Signed-off-by: David Rientjes <rientjes@google.com>
LKML-Reference: <alpine.DEB.2.00.1012221700590.3701@chino.kir.corp.google.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
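The change itself reduces to one constant; sketched here with macro names modelled on the x86 NUMA emulation code (treat the exact spellings as assumptions):

/* Minimum granularity of an emulated node: 32MB instead of 64MB. */
#define FAKE_NODE_MIN_SIZE	((u64)32 << 20)		/* was: (u64)64 << 20 */
#define FAKE_NODE_MIN_HASH_MASK	(~(FAKE_NODE_MIN_SIZE - 1UL))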
-
Submitted by Yinghai Lu

Recent new Intel systems have a different ordering in the MADT: they list all thread0s first, then all thread1s. But the SRAT table still uses the old order, listing the cpus of one socket all together. If the user has compiled with a limited NR_CPUS or boots with nr_cpus=, we could miss putting some cpus' apic-id-to-node mappings into apicid_to_node[]. For example, a 4-socket system with 64 cpus booted with nr_cpus=32 will crash:

  [    9.106288] Total of 32 processors activated (136190.88 BogoMIPS).
  [    9.235021] divide error: 0000 [#1] SMP
  [    9.235315] last sysfs file:
  [    9.235481] CPU 1
  [    9.235592] Modules linked in:
  [    9.245398]
  [    9.245478] Pid: 2, comm: kthreadd Not tainted 2.6.37-rc1-tip-yh-01782-ge92ef79-dirty #274 /Sun Fire x4800
  [    9.265415] RIP: 0010:[<ffffffff81075a8f>]  [<ffffffff81075a8f>] select_task_rq_fair+0x4f0/0x623
  ...
  [    9.645938] RIP  [<ffffffff81075a8f>] select_task_rq_fair+0x4f0/0x623
  [    9.665356]  RSP <ffff88103f8d1c40>
  [    9.665568] ---[ end trace 2296156d35fdfc87 ]---

So just parse all cpu entries in the SRAT. Also add apicid checking against MAX_LOCAL_APIC, in case we would otherwise go out of the bounds of apicid_to_node[]. It fixes the following bug as well:

  https://bugzilla.kernel.org/show_bug.cgi?id=22662

-v2: expand to 32bit according to hpa; need to add MAX_LOCAL_APIC for 32bit

Reported-and-Tested-by: Wu Fengguang <fengguang.wu@intel.com>
Reported-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Tested-by: Myron Stowe <myron.stowe@hp.com>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
LKML-Reference: <4D0AD486.9020704@kernel.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
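The bounds check being added can be sketched like this; the function name is made up, and apicid_to_node[] is used as in the x86 SRAT parsing code, so treat the details as illustrative:

/*
 * Sketch: record the node for every SRAT cpu entry, but only after
 * validating the APIC ID against MAX_LOCAL_APIC so a bogus or large ID
 * cannot index past the end of apicid_to_node[].
 */
static void __init srat_register_cpu_sketch(u32 apic_id, int node)
{
	if (apic_id >= MAX_LOCAL_APIC) {
		pr_warn("SRAT: apic id %u out of range, ignoring\n", apic_id);
		return;
	}

	apicid_to_node[apic_id] = node;
}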
-
Submitted by Yinghai Lu

We should use MAX_LOCAL_APIC for the maximum apic id and MAX_APICS as the number of local apics. The apic_version[] array should also be sized by MAX_LOCAL_APIC.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
LKML-Reference: <4D0AD464.2020408@kernel.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-