- May 20, 2011 (12 commits)
-
Submitted by Suresh Siddha
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Acked-by: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: steiner@sgi.com
Cc: yinghai@kernel.org
Link: http://lkml.kernel.org/r/20110519234637.337024125@sbsiddha-MOBL3.sc.intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Suresh Siddha
Use the so-far-unused probe routine in the apic driver to finalize the apic model selection. This cleans up default_setup_apic_routing(), and the probe routine can in the future also be used for any apic-model-specific initialisation.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Acked-by: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: steiner@sgi.com
Cc: yinghai@kernel.org
Link: http://lkml.kernel.org/r/20110519234637.247458931@sbsiddha-MOBL3.sc.intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Suresh Siddha
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: daniel.blueman@gmail.com
Link: http://lkml.kernel.org/r/20110518233158.089978277@sbsiddha-MOBL3.sc.intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Suresh Siddha
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: daniel.blueman@gmail.com
Link: http://lkml.kernel.org/r/20110518233157.994002011@sbsiddha-MOBL3.sc.intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Suresh Siddha
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: daniel.blueman@gmail.com
Link: http://lkml.kernel.org/r/20110518233157.909013179@sbsiddha-MOBL3.sc.intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Suresh Siddha
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: daniel.blueman@gmail.com
Link: http://lkml.kernel.org/r/20110518233157.830697056@sbsiddha-MOBL3.sc.intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Suresh Siddha
Introduce struct ioapic with an nr_registers field. This will pave the way for consolidating the different MAX_IO_APICS-sized arrays into it.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: daniel.blueman@gmail.com
Link: http://lkml.kernel.org/r/20110518233157.744315519@sbsiddha-MOBL3.sc.intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
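A hedged sketch of what such a container can look like (the layout beyond nr_registers is a guess, not the patch's code):

  /*
   * Sketch: one struct per IO-APIC, so that the parallel MAX_IO_APICS-sized
   * arrays can be folded into its fields over time.
   */
  static struct ioapic {
          int nr_registers;       /* number of redirection-table entries */
          /* further per-ioapic state to be consolidated here */
  } ioapics[MAX_IO_APICS];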
-
Submitted by Suresh Siddha
The code flow for enabling interrupt-remapping has its own routines for saving and restoring io-apic RTEs; the ioapic suspend/resume code flow has similar routines. Remove the duplicated code.

Tested-by: Daniel J Blueman <daniel.blueman@gmail.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Link: http://lkml.kernel.org/r/20110518233157.673130611@sbsiddha-MOBL3.sc.intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Suresh Siddha
The code flow for enabling interrupt-remapping was allocating and freeing buffers for saving/restoring io-apic RTEs. The ioapic suspend/resume code uses the boot-time allocated ioapic_saved_data, which is a perfect match for reuse here. This removes the unnecessary allocation/free of the temporary buffers during suspend/resume on interrupt-remapping enabled platforms, as well as paving the way for further code consolidation.

Tested-by: Daniel J Blueman <daniel.blueman@gmail.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Link: http://lkml.kernel.org/r/20110518233157.574469296@sbsiddha-MOBL3.sc.intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Suresh Siddha
This allows reusing this buffer for enabling interrupt-remapping during boot and resume, and thus allows consolidating the code between ioapic suspend/resume and interrupt-remapping.

Tested-by: Daniel J Blueman <daniel.blueman@gmail.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Link: http://lkml.kernel.org/r/20110518233157.481404505@sbsiddha-MOBL3.sc.intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Daniel J Blueman
Fix a potential deadlock when resuming: the calling function has disabled interrupts, so we cannot sleep. Change the memory allocation flag from GFP_KERNEL to GFP_ATOMIC.

TODO: We can do away with this memory allocation during resume by reusing the ioapic suspend/resume code that uses boot-time allocated buffers, but we want to keep this -stable patch simple.

Signed-off-by: Daniel J Blueman <daniel.blueman@gmail.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: <stable@kernel.org> # v2.6.38/39
Link: http://lkml.kernel.org/r/20110518233157.385970138@sbsiddha-MOBL3.sc.intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
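A minimal sketch of the kind of change involved, assuming an allocation made from a resume path that runs with interrupts disabled (the helper name and shape are hypothetical, not the actual patch):

  /*
   * Hypothetical helper: runs with interrupts disabled, e.g. during
   * resume. GFP_KERNEL may sleep and can deadlock here; GFP_ATOMIC
   * never sleeps, though the allocation may fail instead.
   */
  static struct IO_APIC_route_entry *alloc_rte_buffer(unsigned int nr_entries)
  {
          return kmalloc(nr_entries * sizeof(struct IO_APIC_route_entry),
                         GFP_ATOMIC);    /* was GFP_KERNEL */
  }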
-
Submitted by Dave Jones
Signed-off-by: Dave Jones <davej@redhat.com>
-
- May 19, 2011 (8 commits)
-
Submitted by Daniel Kiper
Clean up code/data section definitions in accordance with include/linux/init.h.

Signed-off-by: Daniel Kiper <dkiper@net-space.pl>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Submitted by Daniel Kiper
Clean up code/data section definitions in accordance with include/linux/init.h.

Signed-off-by: Daniel Kiper <dkiper@net-space.pl>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Submitted by Daniel Kiper
Clean up code/data section definitions in accordance with include/linux/init.h.

Signed-off-by: Daniel Kiper <dkiper@net-space.pl>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Submitted by Daniel Kiper
Clean up code/data section definitions in accordance with include/linux/init.h.

Signed-off-by: Daniel Kiper <dkiper@net-space.pl>
[v1: Rebased on top of latest Linus tree to include fixes in mmu.c]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Submitted by Tian, Kevin
It doesn't make sense to unconditionally unmask a disabled irq when migrating it from an offlined cpu to another. If the irq triggers then it will be disabled in the interrupt handler anyway, so we can simply avoid unmasking it.

[ tglx: Made masking unconditional again and fixed the changelog ]

Signed-off-by: Fengzhe Zhang <fengzhe.zhang@intel.com>
Signed-off-by: Kevin Tian <kevin.tian@intel.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Jan Beulich <JBeulich@novell.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Link: http://lkml.kernel.org/r/%3C625BA99ED14B2D499DC4E29D8138F1505C8ED7F7E3%40shsmsx502.ccr.corp.intel.com%3E
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Submitted by Tian, Kevin
IRQF_PER_CPU means that the irq cannot be moved away from a given cpu. So it must not be migrated when the cpu goes offline.

[ tglx: massaged changelog ]

Signed-off-by: Fengzhe Zhang <fengzhe.zhang@intel.com>
Signed-off-by: Kevin Tian <kevin.tian@intel.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Jan Beulich <JBeulich@novell.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Link: http://lkml.kernel.org/r/%3C625BA99ED14B2D499DC4E29D8138F1505C8ED7F7E2%40shsmsx502.ccr.corp.intel.com%3E
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Submitted by Thomas Gleixner
No need to recalculate the frequency and the conversion factors over and over. Calculate the frequency once, use the new config/register interface, and let the core code do the math.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: John Stultz <john.stultz@linaro.org>
Reviewed-by: Ingo Molnar <mingo@elte.hu>
Link: http://lkml.kernel.org/r/%3C20110518210136.646482357%40linutronix.de%3E
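The config/register interface referred to is presumably clockevents_config_and_register(), introduced around this series; a hedged sketch of a conversion (the device and the delta values are illustrative):

  /*
   * Sketch: pass the core the raw frequency plus min/max delta in ticks
   * and let it derive mult/shift itself instead of open-coding the math.
   */
  static void __init my_timer_register(struct clock_event_device *evt, u32 freq)
  {
          clockevents_config_and_register(evt, freq, 0xF, 0x7FFF);
  }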
-
Submitted by Thomas Gleixner
Let the core do the work.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: John Stultz <john.stultz@linaro.org>
Reviewed-by: Ingo Molnar <mingo@elte.hu>
Link: http://lkml.kernel.org/r/%3C20110518210136.545615675%40linutronix.de%3E
-
- May 18, 2011 (15 commits)
-
Submitted by Jiri Olsa
As reported in BZ #30352:

  https://bugzilla.kernel.org/show_bug.cgi?id=30352

there's a kernel bug related to reading the last allowed page on x86_64.

The _copy_to_user() and _copy_from_user() functions use the following check for the address limit:

  if (buf + size >= limit)
          fail();

while it should be more permissive:

  if (buf + size > limit)
          fail();

That's because size represents the number of bytes being read/written from/to the buf address, including the buf address itself. So the copy function will actually never touch the limit address even if "buf + size == limit".

The following program fails to use the last page as a buffer due to the wrong limit check:

  #include <sys/mman.h>
  #include <sys/socket.h>
  #include <assert.h>

  #define PAGE_SIZE (4096)
  #define LAST_PAGE ((void*)(0x7fffffffe000))

  int main()
  {
          int fds[2], err;
          void *ptr = mmap(LAST_PAGE, PAGE_SIZE, PROT_READ | PROT_WRITE,
                           MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);

          assert(ptr == LAST_PAGE);

          err = socketpair(AF_LOCAL, SOCK_STREAM, 0, fds);
          assert(err == 0);

          err = send(fds[0], ptr, PAGE_SIZE, 0);
          perror("send");
          assert(err == PAGE_SIZE);

          err = recv(fds[1], ptr, PAGE_SIZE, MSG_WAITALL);
          perror("recv");
          assert(err == PAGE_SIZE);

          return 0;
  }

The other place checking the address limit is the access_ok() function, which works properly. There's just a misleading comment on the __range_not_ok() macro, which this patch fixes as well.

The last page of the user-space address range is a guard page, and Brian Gerst observed that the guard page itself cannot be used, due to an erratum on K8 cpus (#121: Sequential Execution Across Non-Canonical Boundary Causes Processor Hang). However, the test code uses the last valid page before the guard page, so the guard page is left in place; the bug is simply that the last byte before the guard page can't be read, because of the off-by-one error.

This bug would normally not show up, because the last page is part of the process stack and is never accessed via syscalls.

Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Acked-by: Brian Gerst <brgerst@gmail.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: <stable@kernel.org>
Link: http://lkml.kernel.org/r/1305210630-7136-1-git-send-email-jolsa@redhat.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Fenghua Yu
Enable/disable the newly documented SMEP (Supervisor Mode Execution Protection) CPU feature in the kernel. CR4.SMEP (bit 20) is 0 at power-on. If the feature is supported by the CPU (X86_FEATURE_SMEP), enable SMEP by setting CR4.SMEP. The new kernel option "nosmep" disables the feature even if it is supported by the CPU.

[ hpa: moved the call to setup_smep() until after the vendor-specific initialization; that ensures that CPUID features are unmasked. We will still run it before we have userspace (never mind uncontrolled userspace). ]

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
LKML-Reference: <1305157865-31727-1-git-send-email-fenghua.yu@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
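A hedged sketch of what such an enable path can look like (helper names and placement are illustrative, not necessarily the patch's code):

  #define X86_CR4_SMEP    0x00100000      /* CR4 bit 20 */

  static int disable_smep;                /* set by a hypothetical "nosmep" parser */

  static void setup_smep(struct cpuinfo_x86 *c)
  {
          /* CR4.SMEP is 0 at power-on; set it only if advertised and allowed */
          if (cpu_has(c, X86_FEATURE_SMEP) && !disable_smep)
                  set_in_cr4(X86_CR4_SMEP);
  }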
-
Submitted by Fenghua Yu
Add support for newly documented SMEP (Supervisor Mode Execution Protection) CPU feature in CR4.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
LKML-Reference: <1305683069-25394-3-git-send-email-fenghua.yu@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Submitted by Fenghua Yu
Add support for the newly documented SMEP (Supervisor Mode Execution Protection) CPU feature flag. SMEP prevents a CPU running in kernel mode from jumping to an executable page that has the user flag set in the PTE. This prevents the kernel from executing user-space code accidentally or maliciously, so it for example prevents kernel exploits from jumping to specially prepared user-mode shell code.

[ hpa: added better description by Ingo Molnar ]

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
LKML-Reference: <1305683069-25394-2-git-send-email-fenghua.yu@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Submitted by Fenghua Yu
Support memset() with enhanced rep stosb. On processors supporting enhanced REP MOVSB/STOSB, the alternative memset_c_e function using enhanced rep stosb overrides the fast string alternative memset_c and the original function.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Link: http://lkml.kernel.org/r/1305671358-14478-10-git-send-email-fenghua.yu@intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
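Conceptually the new variant boils down to a single rep stosb over the whole buffer; a hedged, self-contained sketch (illustrative only, not the kernel's actual assembly, which is patched in via alternatives):

  /* Sketch: memset via 'rep stosb'; ERMS hardware handles alignment internally */
  static void *memset_erms_sketch(void *dest, int c, size_t n)
  {
          void *d = dest;

          asm volatile("rep stosb"
                       : "+D" (d), "+c" (n)   /* RDI = dest, RCX = count */
                       : "a" (c)              /* AL = fill byte */
                       : "memory");
          return dest;
  }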
-
Submitted by Fenghua Yu
Support memmove() with enhanced rep movsb. On processors supporting enhanced REP MOVSB/STOSB, the alternative memmove() function using enhanced rep movsb overrides the original function. The patch doesn't change the backward-copy memmove case to use enhanced rep movsb.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Link: http://lkml.kernel.org/r/1305671358-14478-9-git-send-email-fenghua.yu@intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Submitted by Fenghua Yu
Support memcpy() with enhanced rep movsb. On processors supporting enhanced rep movsb, the alternative memcpy() function using enhanced rep movsb overrides the original function and the fast string function.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Link: http://lkml.kernel.org/r/1305671358-14478-8-git-send-email-fenghua.yu@intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
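The forward copy is analogous to the memset sketch above; again a hedged illustration rather than the patched-in kernel assembly:

  /*
   * Sketch: memcpy via 'rep movsb'. Correct for non-overlapping buffers
   * only; memmove's backward-copy case needs different handling.
   */
  static void *memcpy_erms_sketch(void *dest, const void *src, size_t n)
  {
          void *d = dest;

          asm volatile("rep movsb"
                       : "+D" (d), "+S" (src), "+c" (n)  /* RDI, RSI, RCX */
                       :
                       : "memory");
          return dest;
  }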
-
Submitted by Fenghua Yu
Support copy_to_user()/copy_from_user() with enhanced REP MOVSB/STOSB. On processors supporting enhanced REP MOVSB/STOSB, the alternative copy_user_enhanced_fast_string function using enhanced rep movsb overrides the original function and the fast string function.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Link: http://lkml.kernel.org/r/1305671358-14478-7-git-send-email-fenghua.yu@intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Submitted by Fenghua Yu
Intel processors are adding enhancements to REP MOVSB/STOSB and the use of REP MOVSB/STOSB for optimal memcpy/memset or similar functions is recommended. Enhancement availability is indicated by CPUID.7.0.EBX[9] (Enhanced REP MOVSB/STOSB).

Support clear_page() with rep stosb for processors supporting enhanced REP MOVSB/STOSB. On processors supporting enhanced REP MOVSB/STOSB, the alternative clear_page_c_e function using enhanced REP STOSB overrides the original function and the fast string function.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Link: http://lkml.kernel.org/r/1305671358-14478-6-git-send-email-fenghua.yu@intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Submitted by Fenghua Yu
Add altinstruction_entry macro to generate .altinstructions section entries from assembly code. This should be less failure-prone than open-coding.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Link: http://lkml.kernel.org/r/1305671358-14478-5-git-send-email-fenghua.yu@intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Submitted by Fenghua Yu
Some string operation functions may be patched twice: e.g. on enhanced REP MOVSB/STOSB processors, memcpy is patched first with the fast-string alternative function, and then patched again with the enhanced REP MOVSB/STOSB alternative function. Add a comment documenting the order in which alternatives are applied, to warn anyone who may want to change that order for any reason.

[ Documentation-only patch ]

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Link: http://lkml.kernel.org/r/1305671358-14478-4-git-send-email-fenghua.yu@intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Submitted by Fenghua Yu
If the kernel intends to use enhanced REP MOVSB/STOSB, it must ensure that IA32_MISC_ENABLE.Fast_String_Enable (bit 0) is set and that CPUID.(EAX=07H, ECX=0H):EBX[bit 9] also reports 1.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Link: http://lkml.kernel.org/r/1305671358-14478-3-git-send-email-fenghua.yu@intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
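A hedged sketch of the corresponding check (the MSR and feature-bit names follow the kernel's msr-index.h/cpufeature conventions; the helper and its placement are illustrative):

  /* Sketch: don't trust the ERMS CPUID bit while fast-string is disabled */
  static void check_fast_string(struct cpuinfo_x86 *c)
  {
          u64 misc_enable;

          rdmsrl(MSR_IA32_MISC_ENABLE, misc_enable);
          if (!(misc_enable & MSR_IA32_MISC_ENABLE_FAST_STRING)) {
                  clear_cpu_cap(c, X86_FEATURE_REP_GOOD);
                  clear_cpu_cap(c, X86_FEATURE_ERMS);
          }
  }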
-
Submitted by Fenghua Yu
Intel processors are adding enhancements to REP MOVSB/STOSB and the use of REP MOVSB/STOSB for optimal memcpy/memset or similar functions is recommended. Enhancement availability is indicated by CPUID.7.0.EBX[9] (Enhanced REP MOVSB/STOSB).

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Link: http://lkml.kernel.org/r/1305671358-14478-2-git-send-email-fenghua.yu@intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Submitted by Amerigo Wang
acpi_sleep=s4_nonvs is superseded by acpi_sleep=nonvs, so remove it.

Signed-off-by: WANG Cong <amwang@redhat.com>
Acked-by: Pavel Machek <pavel@ucw.cz>
Acked-by: Len Brown <lenb@kernel.org>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Submitted by Fenghua Yu
CPUID leaf 7, subleaf 0 returns the maximum subleaf in EAX, not the number of subleaves. Since so far only subleaf 0 is defined (and only the EBX bitfield), we do not need to qualify the test.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Link: http://lkml.kernel.org/r/1305660806-17519-1-git-send-email-fenghua.yu@intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: <stable@kernel.org> # 2.6.36..39
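A hedged sketch of the resulting enumeration (cpuid_count() is the kernel's helper; the wrapper name and the capability-word index are illustrative):

  static void get_leaf7_features(struct cpuinfo_x86 *c)
  {
          unsigned int eax, ebx, ecx, edx;

          if (c->cpuid_level < 7)
                  return;

          /* EAX holds the *maximum* subleaf, not a count: no need to test it */
          cpuid_count(0x00000007, 0, &eax, &ebx, &ecx, &edx);
          c->x86_capability[9] = ebx;     /* leaf-7 EBX feature bits */
  }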
-
- May 17, 2011 (5 commits)
-
Submitted by Borislav Petkov
Trying to enable the local APIC timer on early K8 revisions uncovers a number of other issues with it, in conjunction with the C1E entry path on AMD. Fixing those causes much more churn and trouble than the benefit of using that timer brings, so don't enable it on K8 at all, falling back to the original functionality the kernel had in that respect.

Reported-and-bisected-by: Nick Bowler <nbowler@elliptictech.com>
Cc: Boris Ostrovsky <Boris.Ostrovsky@amd.com>
Cc: Andreas Herrmann <andreas.herrmann3@amd.com>
Cc: Greg Kroah-Hartman <greg@kroah.com>
Cc: Hans Rosenfeld <hans.rosenfeld@amd.com>
Cc: Nick Bowler <nbowler@elliptictech.com>
Cc: Joerg-Volker-Peetz <jvpeetz@web.de>
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Link: http://lkml.kernel.org/r/1305636919-31165-3-git-send-email-bp@amd64.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by Borislav Petkov
This reverts commit e20a2d20, as it crashes certain boxes with specific AMD CPU models.

Moving the lower endpoint of the Erratum 400 check to accommodate earlier K8 revisions (A-E) opens a can of worms which is simply not worth fixing properly by tweaking the errata checking framework:

* the missing IntPending MSR on revisions < CG causes a #GP:

  http://marc.info/?l=linux-kernel&m=130541471818831

* it makes earlier revisions use the LAPIC timer instead of the C1E idle routine, which switches to HPET, and thus not wake up in deeper C-states:

  http://lkml.org/lkml/2011/4/24/20

Therefore, leave the original boundary starting with K8-revF.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Submitted by David Rientjes
ZONE_DMA is unnecessary for a large number of machines that do not require less-than-32-bit DMA addressing, e.g. ISA legacy DMA or PCI cards with a restricted DMA address mask.

This patch allows users to disable ZONE_DMA for x86 if they know they will not be using such devices with their kernel. This prevents the VM from unnecessarily reserving a ratio of memory (defaulting to 1/256th of system capacity) with lowmem_reserve_ratio for such allocations when it will never be used.

Signed-off-by: David Rientjes <rientjes@google.com>
Link: http://lkml.kernel.org/r/alpine.DEB.2.00.1105161353560.4353@chino.kir.corp.google.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Submitted by Ondrej Zary
Steppings A1 and B0 of Celeron Covington are currently misdetected as Pentium II (Dixon). Fix it by removing the stepping check.

[ hpa: this fixes this specific bug... the CPUID documentation specifies that the L2 cache size can disambiguate additional CPUs; this patch does not fix that. ]

Signed-off-by: Ondrej Zary <linux@rainbow-software.org>
Link: http://lkml.kernel.org/r/201105162138.15416.linux@rainbow-software.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Submitted by Martin Schwidefsky
Do the mcount offset adjustment in recordmcount.pl/recordmcount.[ch] at compile time and not in ftrace_call_adjust at run time.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-