- 28 January 2011: 12 commits
-
-
Committed by Tejun Heo

apic->apicid_to_node() is a 32-bit-specific apic operation which determines the NUMA node for a CPU. Depending on the APIC implementation, it can be easier to determine the NUMA node from either the physical or the logical apicid. Currently, ->apicid_to_node() takes @logical_apicid and calls hard_smp_processor_id() if the physical apicid is needed. This prevents the NUMA mapping from being queried from a different CPU, which in turn makes it impossible to initialize the NUMA mapping before SMP bringup.

This patch replaces apic->apicid_to_node() with ->x86_32_numa_cpu_node(), which takes @cpu, from which both the logical and the physical apicid can easily be determined. While at it, drop the duplicate implementations from bigsmp_32 and summit_32 and use the default one.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Pekka Enberg <penberg@kernel.org>
Cc: eric.dumazet@gmail.com
Cc: yinghai@kernel.org
Cc: brgerst@gmail.com
Cc: gorcunov@gmail.com
Cc: shaohui.zheng@intel.com
Cc: rientjes@google.com
LKML-Reference: <1295789862-25482-13-git-send-email-tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
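A minimal sketch of the callback change described above; the struct layout is abbreviated and the default implementation shown is an assumption for illustration, not a literal copy of the patch:

```c
/* Sketch only: kernel context (struct apic lives in arch/x86/include/asm/apic.h). */
struct apic {
	void (*init_apic_ldr)(void);	/* ...other callbacks elided... */
#ifdef CONFIG_X86_32
	/*
	 * Old: int (*apicid_to_node)(int logical_apicid);
	 * It had to call hard_smp_processor_id() when the physical id was
	 * needed, so it could only answer for the CPU it ran on.
	 *
	 * New: takes a CPU number, from which both the logical and the
	 * physical apicid can be looked up for any CPU, even before SMP
	 * bringup.
	 */
	int (*x86_32_numa_cpu_node)(int cpu);
#endif
};

/* Plausible default for !NUMA configurations (assumption). */
static int default_x86_32_numa_cpu_node(int cpu)
{
	return 0;	/* everything on node 0 */
}
```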
-
Committed by Tejun Heo

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: eric.dumazet@gmail.com
Cc: yinghai@kernel.org
Cc: brgerst@gmail.com
Cc: gorcunov@gmail.com
Cc: penberg@kernel.org
Cc: shaohui.zheng@intel.com
Cc: rientjes@google.com
LKML-Reference: <1295789862-25482-12-git-send-email-tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Tejun Heo

Factor the logical apic id calculation out of summit_init_apic_ldr() and use it for the x86_32_early_logical_apicid() callback.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: eric.dumazet@gmail.com
Cc: yinghai@kernel.org
Cc: brgerst@gmail.com
Cc: gorcunov@gmail.com
Cc: penberg@kernel.org
Cc: shaohui.zheng@intel.com
Cc: rientjes@google.com
LKML-Reference: <1295789862-25482-11-git-send-email-tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Tejun Heo

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: eric.dumazet@gmail.com
Cc: yinghai@kernel.org
Cc: brgerst@gmail.com
Cc: gorcunov@gmail.com
Cc: penberg@kernel.org
Cc: shaohui.zheng@intel.com
Cc: rientjes@google.com
LKML-Reference: <1295789862-25482-10-git-send-email-tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Tejun Heo

Implement x86_32_early_logical_apicid() for the default apic flat routing.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: eric.dumazet@gmail.com
Cc: yinghai@kernel.org
Cc: brgerst@gmail.com
Cc: gorcunov@gmail.com
Cc: penberg@kernel.org
Cc: shaohui.zheng@intel.com
Cc: rientjes@google.com
LKML-Reference: <1295789862-25482-9-git-send-email-tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Tejun Heo

On x86_32, the mapping between cpu and logical apic ID differs depending on the specific apic implementation in use. The mapping is initialized while bringing up the CPUs; however, this leaves early init code unable to make use of it for memory topology.

Add an x86_32-specific apic->x86_32_early_logical_apicid() which is called early during boot to query the mapping. The mapping is later verified against the result of init_apic_ldr(). The method is allowed to return BAD_APICID if it cannot be determined early.

A noop variant which always returns BAD_APICID is implemented and added to all x86_32 apic implementations.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: eric.dumazet@gmail.com
Cc: yinghai@kernel.org
Cc: brgerst@gmail.com
Cc: gorcunov@gmail.com
Cc: penberg@kernel.org
Cc: shaohui.zheng@intel.com
Cc: rientjes@google.com
LKML-Reference: <1295789862-25482-8-git-send-email-tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Tejun Heo

After the previous patch, apic->cpu_to_logical_apicid() is no longer used. Kill it.

For apic types with a custom cpu_to_logical_apicid() which is also used for other purposes, remove the function and modify its users to do the mapping directly. #ifdefs on CONFIG_SMP in es7000_32 and summit_32 are ignored during the conversion as they are not used for UP kernels.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: eric.dumazet@gmail.com
Cc: yinghai@kernel.org
Cc: brgerst@gmail.com
Cc: gorcunov@gmail.com
Cc: penberg@kernel.org
Cc: shaohui.zheng@intel.com
Cc: rientjes@google.com
LKML-Reference: <1295789862-25482-7-git-send-email-tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Tejun Heo

Currently, the cpu -> logical apic id translation is done by the apic->cpu_to_logical_apicid() callback, which may or may not use x86_cpu_to_logical_apicid. This is unnecessary, as it should always equal logical_smp_processor_id(), which is known early during CPU bringup.

Initialize x86_cpu_to_logical_apicid after apic->init_apic_ldr() in setup_local_APIC() and always use x86_cpu_to_logical_apicid for the cpu -> logical apic id mapping.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: eric.dumazet@gmail.com
Cc: yinghai@kernel.org
Cc: brgerst@gmail.com
Cc: gorcunov@gmail.com
Cc: penberg@kernel.org
Cc: shaohui.zheng@intel.com
Cc: rientjes@google.com
LKML-Reference: <1295789862-25482-6-git-send-email-tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Tejun Heo

Unlike on x86_64, on x86_32 the mapping from cpu to logical apicid may vary depending on the apic in use. The cpu_2_logical_apicid[] array is used for this mapping. Replace it with the early percpu variable x86_cpu_to_logical_apicid to bring it in line with the other apicid mappings.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: eric.dumazet@gmail.com
Cc: yinghai@kernel.org
Cc: brgerst@gmail.com
Cc: gorcunov@gmail.com
Cc: penberg@kernel.org
Cc: shaohui.zheng@intel.com
Cc: rientjes@google.com
LKML-Reference: <1295789862-25482-5-git-send-email-tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
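A hedged sketch of the data-structure change: the flat array becomes an early per-cpu variable so it lines up with the other apicid mappings (the exact definition flags, initial value, and header locations are assumptions):

```c
/* Kernel context assumed: DEFINE_EARLY_PER_CPU, early_per_cpu() and
 * BAD_APICID come from the x86 percpu/apic headers. */

/* Before: a plain array indexed by CPU number. */
/* u8 cpu_2_logical_apicid[NR_CPUS]; */

/* After: an early per-cpu variable, usable before the regular per-cpu
 * areas are set up. */
DEFINE_EARLY_PER_CPU(int, x86_cpu_to_logical_apicid, BAD_APICID);

/* Readers use the early accessor until the per-cpu areas are live: */
/* int lapicid = early_per_cpu(x86_cpu_to_logical_apicid, cpu); */
```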
-
Committed by Tejun Heo

Both functions are used only on 32-bit. Put them inside CONFIG_X86_32. This prepares for the logical apicid handling update.

- Cyrill Gorcunov spotted that I forgot to move the declarations in ipi.h under CONFIG_X86_32. Fixed.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Pekka Enberg <penberg@kernel.org>
Reviewed-by: Cyrill Gorcunov <gorcunov@gmail.com>
Acked-by: Yinghai Lu <yinghai@kernel.org>
Cc: eric.dumazet@gmail.com
Cc: brgerst@gmail.com
Cc: shaohui.zheng@intel.com
Cc: rientjes@google.com
LKML-Reference: <1295789862-25482-4-git-send-email-tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Tejun Heo

Commit 56d91f13 (x86, acpi: Add MAX_LOCAL_APIC for 32bit) added MAX_LOCAL_APIC for x86_32 but didn't replace the MAX_APICID users with it. Convert the MAX_APICID users to MAX_LOCAL_APIC and drop MAX_APICID.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Pekka Enberg <penberg@kernel.org>
Acked-by: Yinghai Lu <yinghai@kernel.org>
Cc: eric.dumazet@gmail.com
Cc: yinghai@kernel.org
Cc: brgerst@gmail.com
Cc: gorcunov@gmail.com
Cc: shaohui.zheng@intel.com
Cc: rientjes@google.com
LKML-Reference: <1295789862-25482-3-git-send-email-tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Tejun Heo

Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Pekka Enberg <penberg@kernel.org>
Acked-by: Yinghai Lu <yinghai@kernel.org>
Cc: eric.dumazet@gmail.com
Cc: yinghai@kernel.org
Cc: brgerst@gmail.com
Cc: gorcunov@gmail.com
Cc: shaohui.zheng@intel.com
Cc: rientjes@google.com
LKML-Reference: <1295789862-25482-2-git-send-email-tj@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 25 January 2011: 1 commit
-
-
Committed by Jesper Juhl

In arch/x86/kernel/dumpstack_64.c::dump_trace() we have this code:

    ...
    if (!stack) {
            unsigned long dummy;
            stack = &dummy;
            if (task && task != current)
                    stack = (unsigned long *)task->thread.sp;
    }

    bp = stack_frame(task, regs);
    /*
     * Print function call entries in all stacks, starting at the
     * current stack address. If the stacks consist of nested
     * exceptions
     */
    tinfo = task_thread_info(task);
    for (;;) {
            char *id;
            unsigned long *estack_end;

            estack_end = in_exception_stack(cpu, (unsigned long)stack,
                                            &used, &id);
    ...

You'll notice that we assign to 'stack' the address of the variable 'dummy', which is only in scope inside the 'if (!stack)' block. So when we later access 'stack' (at the end of the above, assuming we did not take the 'if (task && task != current)' branch), we'll be using the address of a variable that is no longer in scope.

I believe this patch is the proper fix, but I freely admit that I'm not 100% certain.

Signed-off-by: Jesper Juhl <jj@chaosbits.net>
LKML-Reference: <alpine.LNX.2.00.1101242232590.10252@swampdragon.chaosbits.net>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
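The scoping issue is easier to see in isolation. The sketch below is plain userspace C (not the kernel function), showing why taking the address of a variable declared inside the if-block is unsafe once the block ends, and the shape of the fix (hoisting the variable to function scope):

```c
#include <stdio.h>

/* Buggy shape: 'dummy' lives only inside the if-block, so using 'stack'
 * after the block refers to an out-of-scope object (undefined behaviour,
 * even if it often appears to work). */
void buggy(unsigned long *stack)
{
	if (!stack) {
		unsigned long dummy;
		stack = &dummy;
	}
	printf("%p\n", (void *)stack);	/* may point at a dead variable */
}

/* Fixed shape: the variable is declared at function scope, so the
 * pointer stays valid for the rest of the function. */
void fixed(unsigned long *stack)
{
	unsigned long dummy;

	if (!stack)
		stack = &dummy;
	printf("%p\n", (void *)stack);	/* always valid here */
}

int main(void)
{
	fixed(NULL);
	return 0;
}
```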
-
- 22 January 2011: 1 commit
-
-
Committed by Borislav Petkov

Commit ea530692 made a CPU use monitor/mwait when offline. This is not the optimal choice for AMD with respect to power savings, and we'd prefer our cores to halt (i.e. enter C1) instead. For this, the same check used when selecting the idle routine for the machine has to be used when deciding whether to use monitor/mwait for an offlined core.

With this patch, offlining cores 1-5 on an X6 machine allows core 0 to boost again.

[ hpa: putting this in urgent since it is a (power) regression fix ]

Reported-by: Andreas Herrmann <andreas.herrmann3@amd.com>
Cc: stable@kernel.org # 37.x
Cc: H. Peter Anvin <hpa@linux.intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Len Brown <lenb@kernel.org>
Cc: Venkatesh Pallipadi <venki@google.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.hl>
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
LKML-Reference: <1295534572-10730-1-git-send-email-bp@amd64.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
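A hedged sketch of the selection described above; all three helper names below are assumptions used purely for illustration, not the functions actually touched by the patch:

```c
#include <stdbool.h>

/* Assumed names for illustration only. */
extern bool this_cpu_wants_mwait_idle(void);
extern void mwait_play_dead_loop(void);
extern void hlt_play_dead_loop(void);

/* Sketch: how an offlined CPU parks itself. */
static void play_dead_sketch(void)
{
	/*
	 * Use monitor/mwait only when the same check that selects the
	 * machine's idle routine says mwait is appropriate; otherwise
	 * halt (enter C1), so that on AMD X6 parts the remaining online
	 * cores can still boost.
	 */
	if (this_cpu_wants_mwait_idle())
		mwait_play_dead_loop();
	else
		hlt_play_dead_loop();
}
```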
-
- 21 January 2011: 1 commit
-
-
Committed by Fenghua Yu

In therm_throt.c, the commit 9e76a97e patch doesn't export the symbol platform_thermal_notify. Other drivers (e.g. drivers/hwmon/coretemp.c) cannot find the symbol platform_thermal_notify when defining a threshold interrupt handler.

Please apply this patch to allow a threshold interrupt handler in coretemp.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Cc: R Durgadoss <durgadoss.r@intel.com>
Cc: khali@linux-fr.org <khali@linux-fr.org>
Cc: lm-sensors@lm-sensors.org <lm-sensors@lm-sensors.org>
Cc: Guenter Roeck <guenter.roeck@ericsson.com>
LKML-Reference: <20110121041239.GB26954@linux-os.sc.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
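The fix is essentially a one-line export; a hedged sketch (whether the patch uses EXPORT_SYMBOL or EXPORT_SYMBOL_GPL, and the exact hook prototype, are assumptions):

```c
#include <linux/module.h>
#include <linux/types.h>

/* therm_throt.c sketch: make the notify hook reachable from modules such
 * as drivers/hwmon/coretemp.c, which install the threshold interrupt
 * handler through it. */
int (*platform_thermal_notify)(__u64 msr_val);
EXPORT_SYMBOL(platform_thermal_notify);
```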
-
- 20 January 2011: 1 commit
-
-
Committed by Dave Jones

Update to the latest definitions in: http://www.intel.com/Assets/PDF/appnote/241618.pdf

[ Note: this update of the doc has removed some old values which we have listed. I think that until we have clarification that they were never used in production, they should be left there. ]

Signed-off-by: Dave Jones <davej@redhat.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
LKML-Reference: <20110120012055.GA15985@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 19 January 2011: 1 commit
-
-
Committed by Ingo Molnar

This reverts commit 86b1e8dd ("x86: Make relocatable kernel work with new binutils").

Markus Trippelsdorf reported a boot failure caused by this patch.

The real solution to the original problem will likely involve an arch-generic way to define overlaid jiffies_64 and jiffies variables. Until that's done and tested on all architectures, revert this commit to solve the regression.

Reported-and-bisected-by: Markus Trippelsdorf <markus@trippelsdorf.de>
Acked-by: "H. Peter Anvin" <hpa@zytor.com>
Cc: Shaohua Li <shaohua.li@intel.com>
Cc: "Lu, Hongjiu" <hongjiu.lu@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Sam Ravnborg <sam@ravnborg.org>
LKML-Reference: <4D36A759.60704@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 18 January 2011: 2 commits
-
-
Committed by Brian Gerst

Matthias Merz reported that v2.6.37 failed to boot on his system.

Make sure that the thread_info part of the irqstack is initialized to zeroes.

Reported-and-Tested-by: Matthias Merz <linux@merz-ka.de>
Signed-off-by: Brian Gerst <brgerst@gmail.com>
Acked-by: Pekka Enberg <penberg@kernel.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <AANLkTimyKXfJ1x8tgwrr1hYnNLrPfgE1NTe4z7L6tUDm@mail.gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Shaohua Li

The CONFIG_RELOCATABLE=y option is broken with new binutils, which causes a boot panic. According to Lu Hongjiu, the affected binutils versions range from 2.20.51.0.12 to 2.21.51.0.3, which have been released since Oct 22. At least Ubuntu 10.10 ships such binutils. See:

http://sourceware.org/bugzilla/show_bug.cgi?id=12327

The reason for the boot panic is that we have 'jiffies = jiffies_64;' in vmlinux.lds.S, so jiffies isn't placed in any section. During the kernel build there is a warning saying that jiffies is an absolute address and can't be relocated. At runtime, jiffies then ends up with virtual address 0.

Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Cc: Lu Hongjiu <hongjiu.lu@intel.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Sam Ravnborg <sam@ravnborg.org>
LKML-Reference: <1295312269.1949.725.camel@sli10-conroe>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 15 January 2011: 2 commits
-
-
Committed by Jacob Pan

Offlining the secondary CPU causes the timer irq affinity to be set to CPU 0. When the secondary CPU comes back online, the wrong irq affinity is still used.

This patch ensures the secondary per-CPU timer always has the correct IRQ affinity when it is enabled.

Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
LKML-Reference: <1294963604-18111-1-git-send-email-jacob.jun.pan@linux.intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: <stable@kernel.org> 2.6.37
-
Committed by John Stultz

Konrad Wilk reported that the new delayed calibration crashes with a divide by zero on Xen. The reason is that Xen sets the pmtimer address, but reading from it returns 0xffffff. That results in the ref_start and ref_stop values being the same, so the delta is zero, which causes the divide by zero later in the calculation.

The conditional (!hpet && !ref_start && !ref_stop) which sanity-checks the calibration reference values doesn't really make sense: if the refs are null but the hpet is on, we still want to break out.

The divide by zero would also be possible to trigger by chance if both reads from the hardware provided the exact same value (due to hardware wrapping). So checking whether both ref values are the same handles both the case where we don't have hardware (both null) and the case where they happen to be the same value (either from invalid hardware or by chance), avoiding the divide by zero issue.

[ tglx: Applied the same fix to native_calibrate_tsc() where this check was copied from ]

Reported-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: John Stultz <johnstul@us.ibm.com>
LKML-Reference: <1295024788-15619-1-git-send-email-johnstul@us.ibm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
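A minimal sketch of the corrected sanity check: identical reference reads mean there is no usable delta, whether the hardware is absent, returns a constant (as on the reported Xen setup), or happened to wrap to the same value. The function name below is illustrative, not the actual kernel symbol:

```c
#include <linux/types.h>

/* Sketch: return true when the reference clock reads taken around the
 * TSC sampling give a usable delta for calibration. */
static bool calibration_ref_usable(u64 ref_start, u64 ref_stop)
{
	/*
	 * Old check:  if (!hpet && !ref_start && !ref_stop) ...
	 * New check:  identical reads mean no usable reference delta;
	 * using it would divide by zero later in the calculation.
	 */
	return ref_start != ref_stop;
}
```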
-
- 14 January 2011: 5 commits
-
-
Committed by Andrea Arcangeli

Add split_huge_page_pmd() compat code. Each of those call sites would otherwise need to be expanded into hundreds of lines of complex code without a fully reliable split_huge_page_pmd() design.

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Andrea Arcangeli

The pte alloc routines must wait for split_huge_page if the pmd is not present and not null (i.e. pmd_trans_splitting). The additional branches are optimized away at compile time by pmd_trans_splitting if the config option is off. However, we must pass the vma down in order to know which anon_vma lock to wait for.

[akpm@linux-foundation.org: coding-style fixes]

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
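A hedged sketch of the pattern described above, using the THP helpers of that era; the surrounding function is abbreviated, the function name is illustrative, and the exact call sites differ per architecture:

```c
/* Kernel context assumed (<linux/mm.h>, <linux/huge_mm.h>). */
static pte_t *pte_alloc_wait_sketch(struct mm_struct *mm,
				    struct vm_area_struct *vma,
				    pmd_t *pmd, unsigned long address)
{
	if (unlikely(pmd_trans_splitting(*pmd))) {
		/* The vma is passed down precisely so we can find the
		 * anon_vma lock to wait on while the huge pmd is split. */
		wait_split_huge_page(vma->anon_vma, pmd);
		return NULL;		/* caller retries */
	}
	/* ...normal pte allocation continues here... */
	return NULL;
}
```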
-
Committed by Andrea Arcangeli

Add the paravirt ops pmd_update, pmd_update_defer and set_pmd_at. Not all of them might be necessary (VMware needs pmd_update, Xen needs set_pmd_at, nobody needs pmd_update_defer), but this keeps full symmetry with the pte paravirt ops, which looks cleaner and simpler from a common-code point of view.

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by David Rientjes

Four architectures (arm, mips, sparc, x86) use __vmalloc_area() for module_init(). Much of the code is duplicated and can be generalized into a globally accessible function, __vmalloc_node_range().

__vmalloc_node() now calls into __vmalloc_node_range() with a range of [VMALLOC_START, VMALLOC_END) for functionally equivalent behavior. Each architecture may then use __vmalloc_node_range() directly to remove the duplicated code.

Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
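A hedged sketch of the relationship described above; the parameter list follows the commit description and may not match the tree exactly:

```c
/* Kernel context assumed (<linux/vmalloc.h>, <linux/gfp.h>). */

/* __vmalloc_node() becomes a thin wrapper that passes the full vmalloc
 * range to the new generic helper. */
static void *__vmalloc_node_sketch(unsigned long size, gfp_t gfp_mask,
				   pgprot_t prot, int node, void *caller)
{
	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
				    gfp_mask, prot, node, caller);
}

/* Architectures can then call __vmalloc_node_range() directly from their
 * module allocation path, e.g. with a [MODULES_VADDR, MODULES_END) range
 * on x86, instead of open-coding __vmalloc_area(). */
```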
-
Committed by Stephen Hemminger

Occasionally the system gets into a state where the CMOS clock has gotten slightly ahead of the current time and the periodic update of the RTC fails. The message is a nuisance and repeatedly spams the log. See: http://www.ntp.org/ntpfaq/NTP-s-trbl-spec.htm#Q-LINUX-SET-RTC-MMSS

Rather than just removing the message, make it show only once and reduce its severity, since it indicates a normal and non-urgent condition.

Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
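The mechanics of "show only once, at reduced severity" are simple; a hedged sketch (the message text and function name are illustrative, not the patch's actual wording):

```c
#include <linux/kernel.h>

/* Sketch: warn only on the first failed RTC update, at info level,
 * instead of logging a warning on every failed periodic update. */
static void report_rtc_update_failure(void)
{
	printk_once(KERN_INFO
		    "update_persistent_clock: RTC update failed, "
		    "CMOS clock is slightly ahead of system time\n");
}
```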
-
- 13 January 2011: 2 commits
-
-
Committed by Thomas Renninger

cpuidle/x86/perf: fix double power:cpu_idle "end" events and emit cpu_idle events from the cpuidle layer.

Currently the intel_idle and acpi_idle drivers show double cpu_idle "exit idle" events. This patch fixes that and makes emitting cpu_idle events less complex.

It also introduces cpu_idle events for all architectures which use the cpuidle subsystem, namely:
- arch/arm/mach-at91/cpuidle.c
- arch/arm/mach-davinci/cpuidle.c
- arch/arm/mach-kirkwood/cpuidle.c
- arch/arm/mach-omap2/cpuidle34xx.c
- drivers/acpi/processor_idle.c (for all cases, not only mwait)
- arch/x86/kernel/process.c (threw events before, but was a mess)
- drivers/idle/intel_idle.c (threw events before)

The convention should be: fire cpu_idle events inside the current pm_idle function (not somewhere down the callee tree) to keep things easy. The currently possible pm_idle functions on x86 are: c1e_idle, poll_idle, cpuidle_idle_call, mwait_idle, default_idle -> this is really easy now.

This affects userspace: the type field of the cpu_idle power event can now be mapped directly to:
/sys/devices/system/cpu/cpuX/cpuidle/stateX/{name,desc,usage,time,...}
instead of carrying very CPU-/mwait-specific values.

This change is not visible for the intel_idle driver. For the acpi_idle driver it should only be visible if the vendor misses out C-states in the BIOS. Another (perf timechart) patch reads cpuidle info for cpu_idle events from /sys/.../cpuidle/stateX/*; the cpuidle events are then mapped back to the correct C-/cpuidle state, even if e.g. vendors miss out C-states in their BIOS and only export, for example, C1 and C3 -> everything is fine.

Signed-off-by: Thomas Renninger <trenn@suse.de>
CC: Robert Schoene <robert.schoene@tu-dresden.de>
CC: Jean Pihet <j-pihet@ti.com>
CC: Arjan van de Ven <arjan@linux.intel.com>
CC: Ingo Molnar <mingo@elte.hu>
CC: Frederic Weisbecker <fweisbec@gmail.com>
CC: linux-pm@lists.linux-foundation.org
CC: linux-acpi@vger.kernel.org
CC: linux-kernel@vger.kernel.org
CC: linux-perf-users@vger.kernel.org
CC: linux-omap@vger.kernel.org
Signed-off-by: Len Brown <len.brown@intel.com>
-
Committed by Thomas Renninger

Having four variables for the same thing (idle_halt, idle_nomwait, force_mwait and boot_option_idle_overrides) is rather confusing and unnecessarily complex. If an idle= boot parameter is passed, only set up one variable: boot_option_idle_overrides.

This introduces the following functional changes/fixes:
- The intel_idle driver does not register if any idle=xy boot parameter is passed.
- processor_idle.c will also not register a cpuidle driver and get active if idle=halt is passed. Before, a cpuidle driver with one (C1, halt) state got registered. Now the default_idle function will be used, which ultimately uses the same idle call to enter the sleep state (safe_halt()), but without registering a whole cpuidle driver.

That means the idle= parameter will always prevent cpuidle drivers from registering, with one exception (same behavior as before): idle=nomwait may still register the acpi_idle cpuidle driver, but C1 will not use mwait, but hlt. This can be a workaround for IO-based deeper sleep states where C1 mwait causes problems.

Signed-off-by: Thomas Renninger <trenn@suse.de>
cc: x86@kernel.org
Signed-off-by: Len Brown <len.brown@intel.com>
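A hedged sketch of the consolidation: a single enum-valued override variable set while parsing idle=, instead of four separate flags. The enumerator and variable names below follow the description and may not match the tree exactly:

```c
/* Sketch: replace idle_halt / idle_nomwait / force_mwait with one
 * override variable set while parsing the idle= boot parameter. */
enum idle_boot_override {
	IDLE_NO_OVERRIDE = 0,
	IDLE_HALT,		/* idle=halt    */
	IDLE_NOMWAIT,		/* idle=nomwait */
	IDLE_POLL,		/* idle=poll    */
	IDLE_FORCE_MWAIT,	/* idle=mwait   */
};

enum idle_boot_override boot_option_idle_overrides = IDLE_NO_OVERRIDE;

/* cpuidle drivers then need only a single check, e.g.:
 * if (boot_option_idle_overrides != IDLE_NO_OVERRIDE) return -ENODEV; */
```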
-
- 12 January 2011: 8 commits
-
-
Committed by Avi Kivity

init_fpu() (which is indirectly called by the fpu switching code) assumes it is in process context. Rather than making init_fpu() use an atomic allocation, which can cause a task to be killed, make sure the fpu is already initialized when we enter the run loop.

KVM-Stable-Tag.
Reported-and-tested-by: Kirill A. Shutemov <kas@openvz.org>
Acked-by: Pekka Enberg <penberg@kernel.org>
Reviewed-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Avi Kivity

Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Gleb Natapov

If the guest can detect that it runs in a non-preemptable context, it can handle async PFs at any time, so let the host know that it can send async PFs even if the guest cpu is not in userspace.

Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Committed by Gleb Natapov

If an async page fault is received by the idle task, or when preempt_count is not zero, the guest cannot reschedule, so do sti; hlt and wait for the page to be ready. The vcpu can still process interrupts while it waits for the page.

Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Committed by Gleb Natapov

When the async PF capability is detected, hook up a special page fault handler that handles async page fault events and passes other page faults through to the regular page fault handler. Also add async PF handling to nested SVM emulation. Async PF always generates an exit to L1, where the vcpu thread will be scheduled out until the page is available.

Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Committed by Gleb Natapov

Enable async PF in a guest if the async PF capability is discovered.

Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Committed by Gleb Natapov

Async PF also needs to hook into smp_prepare_boot_cpu, so move the hook into generic code.

Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Committed by Huang Ying

Generic Hardware Error Source (GHES) provides a way to report platform hardware errors (such as those from the chipset). It works in so-called "Firmware First" mode; that is, hardware errors are reported to firmware first, and then reported to Linux by the firmware. This way, some non-standard hardware error registers or non-standard hardware links can be checked by firmware to produce more valuable hardware error information for Linux.

This patch adds POLL/IRQ/NMI notification type support.

Because the memory area used to transfer hardware error information from the BIOS to Linux can only be determined in an NMI, IRQ or timer handler, and general ioremap cannot be used in atomic context, a special version of atomic ioremap is implemented for that.

Known issue:
- Error information cannot be printed for recoverable errors notified via NMI, because printk is not NMI-safe. This will be fixed either by delaying the printing to IRQ context via irq_work or by making printk NMI-safe.

v2:
- adjust printk format per comments.

Signed-off-by: Huang Ying <ying.huang@intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
-
- 11 January 2011: 3 commits
-
-
Committed by Jack Steiner

Westmere processors use a different algorithm for assigning APICIDs on SGI UV systems. The location of the node number within the apicid is now a function of the processor type.

Signed-off-by: Jack Steiner <steiner@sgi.com>
LKML-Reference: <20110110195210.GA18737@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Jan Beulich

While both methods should work equally well in the native case, the Xen Dom0 case can't reliably work with the MSR one, since there is no guarantee that the virtual CPUs it has available fully cover all the necessary physical ones.

As per the suggestion of Robert Richter, the patch only adds the PCI method, but leaves the MSR one as a fallback to cover new systems whose PCI IDs may not have been added to the code base yet.

The only change in v2 is the breaking out of the new PCI initialization method into a separate function, as requested by Ingo.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Acked-by: Robert Richter <robert.richter@amd.com>
Cc: Andreas Herrmann3 <Andreas.Herrmann3@amd.com>
Cc: Joerg Roedel <joerg.roedel@amd.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
LKML-Reference: <4D2B3FD7020000780002B67D@vpn.id2.novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Thomas Gleixner

Commit a8760eca (x86: Check tsc available/disabled in the delayed init function) missed preventing the setup of the delayed init function in case the initial tsc calibration failed. This results in the same divide by zero bug as we have seen without the tsc-disabled check.

Skip the delayed work setup when tsc_khz (the initial calibration value) is 0.

Bisected-and-tested-by: Kirill A. Shutemov <kas@openvz.org>
Cc: John Stultz <john.stultz@linaro.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
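The fix boils down to not scheduling the refinement work when the initial calibration produced nothing; a hedged sketch (function and work-item names are illustrative):

```c
/* Sketch of the guard in the tsc_init()-time setup (names abbreviated). */
static void schedule_delayed_tsc_refinement(void)
{
	/*
	 * If the initial calibration failed (tsc_khz == 0), there is
	 * nothing to refine and the delayed work would divide by zero,
	 * so skip setting it up entirely.
	 */
	if (!tsc_khz)
		return;

	/* schedule_delayed_work(&tsc_irqwork, ...); */
}
```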
-
- 10 January 2011: 1 commit
-
-
Committed by Pierre Tardy

The latest Atom SoCs (Penwell) do not have an HPET timer. Since their local APIC timer is clocked at 400 kHz, and the current code limits their Initial Count register to 23 bits, they cannot sleep for more than 1.34 seconds, which leads to ~2 spurious wakeups per second (1 per thread).

These SoCs support a 32-bit timer, so change the max_delta to at least 31 bits. With that we can sleep for at least 300 seconds. We could not find any previous chip erratum where the lapic would only have 23-bit precision.

As powertop suggests activating HPET to "sleep longer", this problem may already be known.

The problem has been here since the very first implementation of the lapic timer as a clock event: e9e2cdb4 ([PATCH] clockevents: i386 drivers).

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Pierre Tardy <pierre.tardy@intel.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: Adrian Bunk <bunk@stusta.de>
Cc: H. Peter Anvin <hpa@linux.intel.com>
Cc: john stultz <johnstul@us.ibm.com>
Cc: Roman Zippel <zippel@linux-m68k.org>
Cc: Andi Kleen <ak@suse.de>
LKML-Reference: <1294327409-19426-1-git-send-email-pierre.tardy@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
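A hedged sketch of the clockevent limit change: the 23-bit and 31-bit masks are the values discussed above, the surrounding setup is abbreviated, and the min_delta value shown is an assumption:

```c
#include <linux/clockchips.h>

/* Sketch, not the literal patch. */
static void lapic_clockevent_limits_sketch(struct clock_event_device *evt)
{
	/*
	 * Old: the Initial Count programming was capped at 23 bits
	 * (0x7FFFFF), which at 400 kHz limits a one-shot sleep to
	 * ~1.34 s.  New: allow 31 bits (0x7FFFFFFF), since the register
	 * is 32 bits wide, permitting sleeps of several hundred seconds.
	 */
	evt->max_delta_ns = clockevent_delta2ns(0x7FFFFFFF, evt);
	evt->min_delta_ns = clockevent_delta2ns(0xF, evt);
}
```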
-