- 13 November 2013, 1 commit
-
-
Submitted by Jiri Slaby
Consider a kernel crash in a module, simulated the following way:

    static int my_init(void)
    {
        char *map = (void *)0x5;
        *map = 3;
        return 0;
    }
    module_init(my_init);

When we turn off FRAME_POINTERs, the very first instruction in that function causes a BUG. The problem is that we print the IP in the BUG report using %pB (from printk_address), and %pB decrements the pointer by one to fix the printing of addresses of functions with tail calls. This was added in commit 71f9e598 ("x86, dumpstack: Use %pB format specifier for stack trace") to fix the call stack printouts. So instead of the correct output:

    BUG: unable to handle kernel NULL pointer dereference at 0000000000000005
    IP: [<ffffffffa01ac000>] my_init+0x0/0x10 [pb173]

we get:

    BUG: unable to handle kernel NULL pointer dereference at 0000000000000005
    IP: [<ffffffffa0152000>] 0xffffffffa0151fff

To fix that, we use %pB only for stack-address printouts (via the newly added printk_stack_address) and %pS for regs->ip (via printk_address). I.e. we revert to the old behaviour for everything except call stacks. And since 'reliable' is 1 at all remaining printk_address() call sites, we remove that parameter from printk_address.

Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: joe@perches.com
Cc: jirislaby@gmail.com
Link: http://lkml.kernel.org/r/1382706418-8435-1-git-send-email-jslaby@suse.cz
Signed-off-by: Ingo Molnar <mingo@kernel.org>
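A minimal sketch of the resulting split, assuming the dumpstack.c helpers keep the shape described above (pr_cont and the exact format strings are illustrative):

    /* Stack entries are return addresses: %pB backs up one byte before the
     * symbol lookup, so tail calls resolve to the calling function. */
    static void printk_stack_address(unsigned long address, int reliable)
    {
            pr_cont(" [<%p>] %s%pB\n",
                    (void *)address, reliable ? "" : "? ", (void *)address);
    }

    /* regs->ip is the faulting instruction itself: look it up as-is. */
    void printk_address(unsigned long address)
    {
            pr_cont(" [<%p>] %pS\n", (void *)address, (void *)address);
    }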
-
- 12 November 2013, 1 commit
-
-
Submitted by Ingo Molnar
This reverts commit 8eba1842. uv_trace() is not used by anything, nor is uv_trace_nmi_func, nor uv_trace_func. That's not how we do instrumentation code in the kernel: we add tracepoints, printk()s, etc., so that everyone, not just those with magic kernel modules, can debug a system. So remove this unused (and misguided) piece of code.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Mike Travis <travis@sgi.com>
Cc: Dimitri Sivanich <sivanich@sgi.com>
Cc: Hedi Berriche <hedi@sgi.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Jason Wessel <jason.wessel@windriver.com>
Link: http://lkml.kernel.org/n/tip-tumfBffmr4jmnt8Gyxanoblg@git.kernel.org
-
- 07 November 2013, 2 commits
-
-
Submitted by Oleg Nesterov
Currently xol_get_insn_slot() assumes that we should simply copy arch_uprobe->insn[], which (ignoring arch_uprobe_analyze_insn) is just the copy of the original insn. This is not true for arm, which needs to create another insn to execute it out-of-line. So this patch simply adds the new member, ->ixol, into the union. This doesn't make any difference for x86 and powerpc, but arm can divorce insn/ixol and initialize the correct xol insn in arch_uprobe_analyze_insn().

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
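A sketch of the resulting layout, using the x86 arch_uprobe as the example (the surrounding fixup state is elided; field sizes per the existing MAX_UINSN_BYTES):

    struct arch_uprobe {
            union {
                    u8 insn[MAX_UINSN_BYTES];   /* copy of the original insn */
                    u8 ixol[MAX_UINSN_BYTES];   /* insn to execute out-of-line */
            };
            /* ... arch-specific analysis/fixup state ... */
    };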
-
Submitted by David A. Long
Move the function declarations from the arch headers to the common header, since only the function bodies are architecture-specific. These changes are from Vincent Rabin's uprobes patch. [ oleg: update arch/powerpc/include/asm/uprobes.h ]

Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: David A. Long <dave.long@linaro.org>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
-
- 31 October 2013, 1 commit
-
-
Submitted by Greg Thelen
this_cpu_sub() is implemented as negation and addition. This patch casts the adjustment to the counter type before negation, to sign extend the adjustment. This helps in cases where the counter type is wider than an unsigned adjustment. An alternative to this patch is to declare such operations unsupported, but it seemed useful to avoid surprises.

This patch specifically helps the following example:

    unsigned int delta = 1
    preempt_disable()
    this_cpu_write(long_counter, 0)
    this_cpu_sub(long_counter, delta)
    preempt_enable()

Before this change, long_counter on a 64 bit machine ends with value 0xffffffff, rather than 0xffffffffffffffff. This is because this_cpu_sub(pcp, delta) boils down to this_cpu_add(pcp, -delta), which is basically:

    long_counter = 0 + 0xffffffff

Also apply the same cast to:

    __this_cpu_sub()
    __this_cpu_sub_return()
    this_cpu_sub_return()

All percpu_test.ko tests pass, especially the following cases which previously failed:

    l -= ui_one;
    __this_cpu_sub(long_counter, ui_one);
    CHECK(l, long_counter, -1);

    l -= ui_one;
    this_cpu_sub(long_counter, ui_one);
    CHECK(l, long_counter, -1);
    CHECK(l, long_counter, 0xffffffffffffffff);

    ul -= ui_one;
    __this_cpu_sub(ulong_counter, ui_one);
    CHECK(ul, ulong_counter, -1);
    CHECK(ul, ulong_counter, 0xffffffffffffffff);

    ul = this_cpu_sub_return(ulong_counter, ui_one);
    CHECK(ul, ulong_counter, 2);

    ul = __this_cpu_sub_return(ulong_counter, ui_one);
    CHECK(ul, ulong_counter, 1);

Signed-off-by: Greg Thelen <gthelen@google.com>
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
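The pitfall is plain C integer promotion; a stand-alone user-space illustration (not kernel code, assumes an LP64 machine):

    #include <stdio.h>

    int main(void)
    {
            long counter = 0;
            unsigned int delta = 1;

            /* -delta is computed as unsigned int: 0xffffffff, zero-extended. */
            counter += -delta;
            printf("no cast:   0x%lx\n", counter);   /* 0xffffffff */

            /* Casting to the counter type first makes the negation sign-extend. */
            counter = 0;
            counter += -(long)delta;
            printf("with cast: 0x%lx\n", counter);   /* 0xffffffffffffffff */
            return 0;
    }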
-
- 29 October 2013, 1 commit
-
-
Submitted by Matt Fleming
It's incredibly difficult to diagnose early EFI boot issues without special hardware because earlyprintk=vga doesn't work on EFI systems. Add support for writing to the EFI framebuffer, via earlyprintk=efi, which will actually give users a chance of providing debug output.

Cc: H. Peter Anvin <hpa@zytor.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Jones <pjones@redhat.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
-
- 26 October 2013, 2 commits
-
-
Submitted by Jan Beulich
Similarly to copy_from_user(), where the range check protects against kernel memory corruption, copy_to_user() can benefit from such checking too: here it protects against kernel information leaks.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Cc: <arjan@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/5265059502000078000FC4F6@nat28.tlf.novell.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
-
Submitted by Jan Beulich
Commits 4a312769 ("x86: Turn the copy_from_user check into an (optional) compile time warning") and 63312b6a ("x86: Add a Kconfig option to turn the copy_from_user warnings into errors") touched only the 32-bit variant of copy_from_user(), whereas the original commit 9f0cf4ad ("x86: Use __builtin_object_size() to validate the buffer size for copy_from_user()") also added the same code to the 64-bit one.

Further, the earlier conversion from an inline WARN() to the call to copy_from_user_overflow() went a little too far: when the number of bytes to be copied is not a constant (e.g. [looking at 3.11] in drivers/net/tun.c:__tun_chr_ioctl() or drivers/pci/pcie/aer/aer_inject.c:aer_inject_write()), the compiler will always have to keep the function call, and hence there will always be a warning. By using __builtin_constant_p() we can avoid this.

And then this slightly extends the effect of CONFIG_DEBUG_STRICT_USER_COPY_CHECKS in that, apart from converting warnings to errors in the constant size case, it retains the (possibly wrong) warnings in the non-constant size case, such that if someone is prepared to get a few false positives, (s)he'll be able to recover the current behavior (except that these diagnostics now will never be converted to errors).

Since the 32-bit variant (intentionally) didn't call might_fault(), the unification results in this being called twice now. Adding a suitable #ifdef would be the alternative if that's a problem.

I'd like to point out though that with __compiletime_object_size() being restricted to gcc before 4.6, the whole construct is going to become more and more pointless going forward. I would question however that commit 2fb0815c ("gcc4: disable __compiletime_object_size for GCC 4.6+") was really necessary, and instead this should have been dealt with as is done here from the beginning.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/5265056D02000078000FC4F3@nat28.tlf.novell.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
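A sketch of the unified check, assuming the helper names mentioned in the commits above (the exact diagnostics and Kconfig plumbing are simplified):

    static inline unsigned long __must_check
    copy_from_user(void *to, const void __user *from, unsigned long n)
    {
            int sz = __compiletime_object_size(to);

            might_fault();
            if (likely(sz < 0 || sz >= n))          /* size unknown, or it fits */
                    n = _copy_from_user(to, from, n);
            else if (__builtin_constant_p(n))       /* provably-broken constant size */
                    copy_from_user_overflow();      /* warning, or error if strict */
            else                                    /* non-constant: may be a false positive */
                    __copy_from_user_overflow(sz, n);
            return n;
    }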
-
- 24 October 2013, 1 commit
-
-
Submitted by Chen, Gong
This H/W error log driver (a.k.a. the eMCA driver) is implemented based on http://www.intel.com/content/www/us/en/architecture-and-technology/enhanced-mca-logging-xeon-paper.html

After errors are captured, more detailed platform-specific information can be obtained via this new enhanced H/W error log driver. Most notably, we can track memory errors back to the DIMM slot silk screen label.

Signed-off-by: Chen, Gong <gong.chen@linux.intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 18 October 2013, 5 commits
-
-
Submitted by David Cohen
As Intel rolls out more SoCs after Moorestown, we need to restructure the code in a way that is backward compatible and easy to expand. This patch implements a flexible way to support multiple boards and devices. It does not add any new functional support; it just refactors the existing code to increase the modularity and decrease the code duplication for supporting multiple SoCs and boards.

Currently intel-mid.c has both board and SoC related code in one file. This patch moves the board related code to new files and lets the linker script create the SFI device table, as follows:

1. Move the SFI device specific code to arch/x86/platform/intel-mid/device-libs/platform_<device>.* A new device file is added for every supported device. This code will get conditionally compiled by using the corresponding device driver CONFIG option.

2. Move the device_ids location to the .x86_intel_mid_dev.init section by using the new sfi_device() macro (see the sketch below).

This patch was based on previous code from Sathyanarayanan Kuppuswamy.

Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Link: http://lkml.kernel.org/r/1382049336-21316-13-git-send-email-david.a.cohen@linux.intel.com
Signed-off-by: David Cohen <david.a.cohen@linux.intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
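A hedged sketch of the registration pattern: sfi_device() drops a pointer to a devs_id entry into the dedicated init section, and a per-device file uses it roughly like this ("foo" and its fields are hypothetical; only the section name comes from the description above):

    #define sfi_device(i)                                                    \
            static const struct devs_id *const __intel_mid_sfi_##i##_dev     \
            __used __attribute__((__section__(".x86_intel_mid_dev.init"))) = &i

    /* arch/x86/platform/intel-mid/device-libs/platform_foo.c (hypothetical) */
    static const struct devs_id foo_dev_id __initconst = {
            .name = "foo",
            .type = SFI_DEV_TYPE_IPC,
            .get_platform_data = &foo_platform_data,
    };

    sfi_device(foo_dev_id);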
-
Moved the SFI specific parsing/handling code to sfi.c. This will enable us to reuse the intel-mid code for platforms that support firmware interfaces other than SFI (like ACPI).

Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Link: http://lkml.kernel.org/r/1382049336-21316-10-git-send-email-david.a.cohen@linux.intel.com
Signed-off-by: David Cohen <david.a.cohen@linux.intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Added a custom handler for Medfield-based IPC devices and moved the devs_id structure definition to the header file.

Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Link: http://lkml.kernel.org/r/1382049336-21316-9-git-send-email-david.a.cohen@linux.intel.com
Signed-off-by: David Cohen <david.a.cohen@linux.intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
mrst is used as a common name to represent all intel_mid type SoCs, but Moorestown is just one of the intel_mid SoCs. So rename the variables and related functions that use the *mrst* prefix to use *intel_mid* instead. To ensure that there are no functional changes, I have compared the objdump of the related files before and after the rename and found that the only differences are symbol and name changes.

Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Link: http://lkml.kernel.org/r/1382049336-21316-6-git-send-email-david.a.cohen@linux.intel.com
Signed-off-by: David Cohen <david.a.cohen@linux.intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
The following files contain code that is common to all Intel MID SoCs, so rename them as below:

    mrst/mrst.c              -> intel-mid/intel-mid.c
    mrst/vrtc.c              -> intel-mid/intel_mid_vrtc.c
    mrst/early_printk_mrst.c -> intel-mid/early_printk_intel_mid.c
    pci/mrst.c               -> pci/intel_mid_pci.c

Also, rename the corresponding header files and adjust the driver files that included them. To ensure that there are no functional changes, I have compared the objdump of the renamed files before and after the rename and found that the only difference is the file name change.

Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Link: http://lkml.kernel.org/r/1382049336-21316-4-git-send-email-david.a.cohen@linux.intel.com
Signed-off-by: David Cohen <david.a.cohen@linux.intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- 11 October 2013, 3 commits
-
-
Submitted by Ingo Molnar
Apply the asm_volatile_goto() compiler quirk to the new rmwcc.h file as well, which was introduced in commit c2daa3be ("sched, x86: Provide a per-cpu preempt_count implementation").

Reported-and-tested-by: Fengguang Wu <fengguang.wu@intel.com>
Reported-by: Oleg Nesterov <oleg@redhat.com>
Reported-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Suggested-by: Jakub Jelinek <jakub@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Ingo Molnar
Fengguang Wu, Oleg Nesterov and Peter Zijlstra tracked down a kernel crash to a GCC bug: GCC miscompiles certain 'asm goto' constructs, as outlined here: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58670

Implement a workaround suggested by Jakub Jelinek.

Reported-and-tested-by: Fengguang Wu <fengguang.wu@intel.com>
Reported-by: Oleg Nesterov <oleg@redhat.com>
Reported-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Suggested-by: Jakub Jelinek <jakub@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: <stable@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
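A sketch of the workaround as it landed in include/linux/compiler-gcc.h: every 'asm goto' is routed through a macro that appends an empty asm statement, which keeps affected GCC versions from miscompiling the construct:

    /*
     * Work around GCC PR58670: an empty asm() right after the asm goto
     * prevents the miscompilation. All 'asm goto' users go through this
     * macro instead of writing the construct directly.
     */
    #define asm_volatile_goto(x...)  do { asm goto(x); asm (""); } while (0)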
-
Submitted by K. Y. Srinivasan
Hyper-V supports a mechanism for retrieving the local APIC frequency. Use this and bypass the calibration code in the kernel. This allows us to boot the Linux kernel as a "modern VM" on Hyper-V, where many of the legacy devices (such as the PIT) are not emulated.

I would like to thank Olaf Hering <olaf@aepfle.de>, Jan Beulich <JBeulich@suse.com> and H. Peter Anvin <h.peter.anvin@intel.com> for their help in this effort. In this version of the patch, I have addressed Jan's comments.

Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Link: http://lkml.kernel.org/r/1380554932-9888-1-git-send-email-olaf@aepfle.de
Tested-by: Olaf Hering <olaf@aepfle.de>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
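A sketch of the mechanism, assuming the Hyper-V MSR names from the kernel's hyperv headers (HV_X64_MSR_APIC_FREQUENCY reports ticks per second; lapic_timer_frequency is the value the calibration code would otherwise compute):

    /* In the Hyper-V platform setup (e.g. arch/x86/kernel/cpu/mshyperv.c): */
    if (ms_hyperv.features & HV_X64_MSR_APIC_FREQUENCY_AVAILABLE) {
            u64 hv_lapic_frequency;

            rdmsrl(HV_X64_MSR_APIC_FREQUENCY, hv_lapic_frequency);
            /* The MSR reports ticks/second; the kernel wants ticks per HZ. */
            lapic_timer_frequency = div_u64(hv_lapic_frequency, HZ);
    }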
-
- 28 September 2013, 2 commits
-
-
Submitted by Borislav Petkov
As the new x86 CPU bootup printout format code maintainer, I am taking immediate action to improve and clean (and thus indulge my OCD) the reporting of the cores when coming up online. Fix padding to a right-hand alignment, clean up the code and bind the reporting width to the max number of supported CPUs on the system, like this:

    [ 0.074509] smpboot: Booting Node 0, Processors: #1 #2 #3 #4 #5 #6 #7 OK
    [ 0.644008] smpboot: Booting Node 1, Processors: #8 #9 #10 #11 #12 #13 #14 #15 OK
    [ 1.245006] smpboot: Booting Node 2, Processors: #16 #17 #18 #19 #20 #21 #22 #23 OK
    [ 1.864005] smpboot: Booting Node 3, Processors: #24 #25 #26 #27 #28 #29 #30 #31 OK
    [ 2.489005] smpboot: Booting Node 4, Processors: #32 #33 #34 #35 #36 #37 #38 #39 OK
    [ 3.093005] smpboot: Booting Node 5, Processors: #40 #41 #42 #43 #44 #45 #46 #47 OK
    [ 3.698005] smpboot: Booting Node 6, Processors: #48 #49 #50 #51 #52 #53 #54 #55 OK
    [ 4.304005] smpboot: Booting Node 7, Processors: #56 #57 #58 #59 #60 #61 #62 #63 OK
    [ 4.961413] Brought up 64 CPUs

and this:

    [ 0.072367] smpboot: Booting Node 0, Processors: #1 #2 #3 #4 #5 #6 #7 OK
    [ 0.686329] Brought up 8 CPUs

Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Libin <huawei.libin@huawei.com>
Cc: wangyijing@huawei.com
Cc: fenghua.yu@intel.com
Cc: guohanjun@huawei.com
Cc: paul.gortmaker@windriver.com
Link: http://lkml.kernel.org/r/20130927143554.GF4422@pd.tnic
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Peter Zijlstra
Yuanhan reported a serious throughput regression in his pigz benchmark. Using the ftrace patch I found that several idle paths need more TLC before we can switch the generic need_resched() over to preempt_need_resched. The preemption paths benefit most from preempt_need_resched and do indeed use it; all other need_resched() users don't really care that much, so reverting need_resched() back to tif_need_resched() is the simple and safe solution.

Reported-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: lkp@linux.intel.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20130927153003.GF15690@laptop.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 25 September 2013, 5 commits
-
-
Submitted by David Vrabel
On hosts with more than 168 GB of memory, a 32-bit guest may attempt to grant map an MFN that it cannot look up in its mapping of the m2p table. There is an m2p lookup as part of m2p_add_override() and m2p_remove_override(). The lookup falls off the end of the mapped portion of the m2p and (because the mapping is at the highest virtual address) wraps around, and the lookup causes a fault on what appears to be a user space address.

do_page_fault() (thinking it's a fault to a userspace address) tries to lock mm->mmap_sem. If the gntdev device is used for the grant map, m2p_add_override() is called from gnttab_mmap() with mm->mmap_sem already locked. do_page_fault() then deadlocks. The deadlock would most commonly occur when a 64-bit guest is started and xenconsoled attempts to grant map its console ring.

Introduce mfn_to_pfn_no_overrides(), which checks that the MFN is within the mapped portion of the m2p table before accessing the table, and use this in m2p_add_override(), m2p_remove_override(), and mfn_to_pfn() (which already had the correct range check). All faults caused by accessing the non-existent parts of the m2p are thus within the kernel address space, and fixup_exception() is reached without trying to lock mm->mmap_sem.

This means that for MFNs that are outside the mapped range of the m2p, mfn_to_pfn() will always look in the m2p overrides. This is correct because it must be a foreign MFN (and the PFN in the m2p in this case is only relevant for the other domain).

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@citrix.com>
Cc: Jan Beulich <JBeulich@suse.com>
--
v3: check for auto_translated_physmap in mfn_to_pfn_no_overrides()
v2: in mfn_to_pfn() look in m2p_overrides if the MFN is out of range as it's probably foreign.
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
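A sketch of the new helper, under the assumptions stated in the message (machine_to_phys_nr bounds the mapped m2p; out-of-range and failed lookups return an invalid PFN instead of faulting into what looks like user space):

    static inline unsigned long mfn_to_pfn_no_overrides(unsigned long mfn)
    {
            unsigned long pfn;

            if (xen_feature(XENFEAT_auto_translated_physmap))
                    return mfn;

            if (unlikely(mfn >= machine_to_phys_nr))
                    return ~0UL;            /* beyond the mapped m2p: invalid */

            /*
             * The m2p access may still fault (e.g. device space); __get_user()
             * routes that through the kernel exception fixup, not mmap_sem.
             */
            if (__get_user(pfn, &machine_to_phys_mapping[mfn]) < 0)
                    return ~0UL;

            return pfn;
    }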
-
Submitted by Peter Zijlstra
Remove the bloat of the C calling convention out of the preempt_enable() sites by creating an ASM wrapper which allows us to do an asm("call ___preempt_schedule") instead.

calling.h bits by Andi Kleen.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/n/tip-tk7xdi1cvvxewixzke8t8le1@git.kernel.org
[ Fixed build error. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
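The idea, sketched: the thunk saves and restores the caller-saved registers itself, so the inline call clobbers nothing as far as the compiler is concerned (register handling abbreviated; the real wrapper lives in an arch asm file):

    /* C side: no register pressure at the preempt_enable() call site. */
    #define __preempt_schedule()    asm ("call ___preempt_schedule")

    /* asm side (sketch): preserve the C-clobbered registers around the call.
     *
     *   ENTRY(___preempt_schedule)
     *           SAVE_ALL                 # push caller-saved registers
     *           call preempt_schedule
     *           RESTORE_ALL              # pop them again
     *           ret
     *   END(___preempt_schedule)
     */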
-
Submitted by Peter Zijlstra
Convert x86 to use a per-cpu preemption count. The reason for doing so is that accessing per-cpu variables is a lot cheaper than accessing thread_info variables.

We still need to save/restore the actual preemption count due to PREEMPT_ACTIVE, so we place the per-cpu __preempt_count variable in the same cache-line as the other hot __switch_to() variables such as current_task.

NOTE: this save/restore is required even for !PREEMPT kernels, as cond_resched() also relies on preempt_count's PREEMPT_ACTIVE to ignore task_struct::state.

Also rename thread_info::preempt_count to ensure nobody is 'accidentally' still poking at it.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/n/tip-gzn5rfsf8trgjoqx8hyayy3q@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
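A sketch of what the x86 accessors become (a %gs-relative per-cpu access instead of chasing the thread_info pointer; names as described above, details simplified):

    DECLARE_PER_CPU(int, __preempt_count);

    static __always_inline int preempt_count(void)
    {
            return __this_cpu_read(__preempt_count);   /* one mov %gs:..., %eax */
    }

    static __always_inline void __preempt_count_add(int val)
    {
            __this_cpu_add(__preempt_count, val);      /* one addl $val, %gs:... */
    }

    static __always_inline void __preempt_count_sub(int val)
    {
            __this_cpu_add(__preempt_count, -val);
    }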
-
Submitted by Peter Zijlstra
In order to prepare for per-arch implementations of preempt_count, move the required bits into an asm-generic header and use this for all archs.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/n/tip-h5j0c1r3e3fk015m30h8f1zx@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Peter Zijlstra
Linus suggested using asm goto to get rid of the typical SETcc + TEST instruction pair -- which also clobbers an extra register -- for our typical modify_and_test() functions.

Because asm goto doesn't allow output fields, it has to include an unconditional memory clobber when it changes a memory variable, to force a reload. Luckily all atomic ops already imply a compiler barrier to go along with their memory barrier semantics.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/n/tip-0mtn9siwbeo1d33bap1422se@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
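A sketch of the resulting rmwcc.h helper, assuming asm goto support (close to the shape described: the conditional jump goes straight to a C label, so there is no SETcc/TEST pair and no output register):

    #define __GEN_RMWcc(fullop, var, cc, ...)                               \
    do {                                                                    \
            asm_volatile_goto (fullop "; j" cc " %l[cc_label]"              \
                            : : "m" (var), ## __VA_ARGS__                   \
                            : "memory" : cc_label);                         \
            return 0;                                                       \
    cc_label:                                                               \
            return 1;                                                       \
    } while (0)

    /* e.g. atomic_dec_and_test() can then be expressed as:
     *   GEN_UNARY_RMWcc(LOCK_PREFIX "decl", v->counter, "%0", "e");
     */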
-
- 24 September 2013, 3 commits
-
-
Submitted by Mike Travis
This patch adds support for the uvtrace module by providing a skeleton call to the registered trace function. It also provides another separate 'NMI' tracer that is triggered by the system wide 'power nmi' command.

Signed-off-by: Mike Travis <travis@sgi.com>
Reviewed-by: Dimitri Sivanich <sivanich@sgi.com>
Reviewed-by: Hedi Berriche <hedi@sgi.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Jason Wessel <jason.wessel@windriver.com>
Link: http://lkml.kernel.org/r/20130923212501.185052551@asylum.americas.sgi.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Mike Travis
The current UV NMI handler has not been updated for the changes in the system NMI handler and the perf operations. The UV NMI handler reads an MMR in the UV Hub to check whether the NMI event was caused by the external 'system NMI' that the operator can initiate on the System Mgmt Controller.

The problem arises when the perf tools are running, causing millions of perf events per second on very large CPU count systems. Previously this was okay because the perf NMI handler ran at a higher priority on the NMI call chain, and if the NMI was a perf event, it would stop calling the other NMI handlers remaining on the NMI call chain.

Now the system NMI handler calls all the handlers on the NMI call chain, including the UV NMI handler. This causes the UV NMI handler to read the MMRs at the same millions per second rate. This can lead to significant performance loss and possible system failures. It also can cause thousands of 'Dazed and Confused' messages being sent to the system console. This effectively makes perf tools unusable on UV systems.

To avoid this excessive overhead when perf tools are running, this code has been optimized to minimize reading of the MMRs as much as possible, by moving to the NMI_UNKNOWN notifier chain. This chain is called only when all the users on the standard NMI_LOCAL call chain have been called and none of them have claimed this NMI.

There is an exception where the NMI_LOCAL notifier chain is used. When the perf tools are in use, it's possible that the UV NMI was captured by some other NMI handler and then either ignored or mistakenly processed as a perf event. We set a per_cpu ('ping') flag for those CPUs that ignored the initial NMI, and then send them an IPI NMI signal. The NMI_LOCAL handler on each cpu does not need to read the MMR, but instead checks the in-memory flag indicating it was pinged.

There are two module variables: 'ping_count', indicating how many requested NMI events occurred, and 'ping_misses', indicating how many stray NMI events occurred. These most likely are perf events, so they show the overhead of the perf NMI interrupts and how many MMR reads were avoided.

This patch also minimizes the reads of the MMRs by having the first cpu entering the NMI handler on each node set a per HUB in-memory atomic value. (Having a per HUB value avoids sending lock traffic over NumaLink.) Both types of UV NMIs from the SMI layer are supported.

Signed-off-by: Mike Travis <travis@sgi.com>
Reviewed-by: Dimitri Sivanich <sivanich@sgi.com>
Reviewed-by: Hedi Berriche <hedi@sgi.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Jason Wessel <jason.wessel@windriver.com>
Link: http://lkml.kernel.org/r/20130923212500.353547733@asylum.americas.sgi.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Mike Travis
This patch moves the UV NMI support from the x2apic file to a new separate uv_nmi.c file, in preparation for the next sequence of patches. It prevents upcoming bloat of the x2apic file, and has the added benefit of putting the upcoming /sys/module parameters under the name 'uv_nmi' instead of 'x2apic_uv_x', which was obscure.

Signed-off-by: Mike Travis <travis@sgi.com>
Reviewed-by: Dimitri Sivanich <sivanich@sgi.com>
Reviewed-by: Hedi Berriche <hedi@sgi.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Jason Wessel <jason.wessel@windriver.com>
Link: http://lkml.kernel.org/r/20130923212500.183295611@asylum.americas.sgi.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 12 September 2013, 2 commits
-
-
Submitted by Cyrill Gorcunov
The _PAGE_SOFT_DIRTY bit should never be set on a present pte, so add a VM_BUG_ON to catch any potential future abuse. Also add a comment on the _PAGE_SWP_SOFT_DIRTY definition explaining the scope of its usage.

Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
Acked-by: Pavel Emelyanov <xemul@parallels.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
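A sketch of the kind of assertion this adds: the swap variant of the bit is only meaningful on non-present ptes, so the swp helpers can trap misuse (exact call sites per the patch; helper bodies illustrative):

    static inline pte_t pte_swp_mksoft_dirty(pte_t pte)
    {
            VM_BUG_ON(pte_present(pte));    /* swap bit: non-present ptes only */
            return pte_set_flags(pte, _PAGE_SWP_SOFT_DIRTY);
    }

    static inline int pte_swp_soft_dirty(pte_t pte)
    {
            VM_BUG_ON(pte_present(pte));
            return pte_flags(pte) & _PAGE_SWP_SOFT_DIRTY;
    }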
-
Submitted by Dave Hansen
The previous patch doing vmstats for TLB flushes ("mm: vmstats: tlb flush counters") effectively missed UP, since arch/x86/mm/tlb.c is only compiled for SMP. UP systems do not do remote TLB flushes, so compile those counters out on UP.

arch/x86/kernel/cpu/mtrr/generic.c calls __flush_tlb() directly. This is probably an optimization since both the mtrr code and __flush_tlb() write cr4. It would probably be safe to make that a flush_tlb_all() (and then get these statistics), but the mtrr code is ancient and I'm hesitant to touch it other than to just stick in the counters.

[akpm@linux-foundation.org: tweak comments]
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 11 September 2013, 1 commit
-
-
Submitted by Andi Kleen
The 64-bit __copy_{from,to}_user_inatomic always called copy_user_generic, but skipped the special optimizations for 1/2/4/8 byte accesses. This especially hurts the futex call, which accesses the 4 byte futex user value with a complicated fast string operation in a function call, instead of a single movl.

Use __copy_{from,to}_user for _inatomic instead, to get the same optimizations. The only problem was the might_fault() in those functions. So move that to a new wrapper and call __copy_{from,to}_user_nocheck() from the _inatomic variants directly. 32-bit already did this correctly by duplicating the code.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1376687844-19857-2-git-send-email-andi@firstfloor.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
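A sketch of the resulting layering in uaccess_64.h (names per the message; the fixed-size fast paths inside the _nocheck body are elided):

    /* Shared body: constant sizes of 1/2/4/8 compile down to a single mov. */
    static __always_inline int
    __copy_from_user_nocheck(void *dst, const void __user *src, unsigned size)
    {
            /* ... fixed-size 1/2/4/8-byte fast paths, then: */
            return copy_user_generic(dst, (__force void *)src, size);
    }

    static __always_inline int
    __copy_from_user(void *dst, const void __user *src, unsigned size)
    {
            might_fault();                  /* sleeping-context debug check */
            return __copy_from_user_nocheck(dst, src, size);
    }

    static __always_inline int
    __copy_from_user_inatomic(void *dst, const void __user *src, unsigned size)
    {
            /* atomic callers (e.g. futex) must not call might_fault() */
            return __copy_from_user_nocheck(dst, src, size);
    }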
-
- 03 September 2013, 1 commit
-
-
Submitted by Linus Torvalds
Instead of taking the spinlock, the lockless versions atomically check that the lock is not taken, and do the reference count update using a cmpxchg() loop. This is semantically identical to doing the reference count update protected by the lock, but avoids the "wait for lock" contention that you get when accesses to the reference count are contended.

Note that a "lockref" is absolutely _not_ equivalent to an atomic_t. Even when the lockref reference counts are updated atomically with cmpxchg, the fact that they also verify the state of the spinlock means that the lockless updates can never happen while somebody else holds the spinlock.

So while "lockref_put_or_lock()" looks a lot like just another name for "atomic_dec_and_lock()", and both optimize to lockless updates, they are fundamentally different: the decrement done by atomic_dec_and_lock() is truly independent of any lock (as long as it doesn't decrement to zero), so a locked region can still see the count change.

The lockref structure, in contrast, really is a *locked* reference count. If you hold the spinlock, the reference count will be stable and you can modify the reference count without using atomics, because even the lockless updates will see and respect the state of the lock.

In order to enable the cmpxchg lockless code, the architecture needs to do three things:

 (1) Make sure that the "arch_spinlock_t" and an "unsigned int" can fit in an aligned u64, and have a "cmpxchg()" implementation that works on such a u64 data type.

 (2) define a helper function to test for a spinlock being unlocked ("arch_spin_value_unlocked()")

 (3) select the "ARCH_USE_CMPXCHG_LOCKREF" config variable in its Kconfig file.

This enables it for x86-64 (but not 32-bit, we'd need to make sure cmpxchg() turns into the proper cmpxchg8b in order to enable it for 32-bit mode).

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
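A condensed sketch of the lockless fast path with a locked fallback (the real code in lib/lockref.c wraps this pattern in a macro shared by all the lockref operations):

    void lockref_get(struct lockref *lockref)
    {
            struct lockref old, new;
            u64 seen;

            /* lock_count aliases {spinlock, count} as one aligned u64. */
            old.lock_count = ACCESS_ONCE(lockref->lock_count);
            while (arch_spin_value_unlocked(old.lock.rlock.raw_lock)) {
                    new = old;
                    new.count++;
                    seen = cmpxchg(&lockref->lock_count,
                                   old.lock_count, new.lock_count);
                    if (seen == old.lock_count)
                            return;          /* lockless update won */
                    old.lock_count = seen;   /* somebody raced us: retry */
            }

            /* Lock was (or became) held: fall back to the locked path. */
            spin_lock(&lockref->lock);
            lockref->count++;
            spin_unlock(&lockref->lock);
    }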
-
- 02 September 2013, 1 commit
-
-
Submitted by H. Peter Anvin
Add SMAP annotations to csum_partial_copy_to/from_user(). These functions legitimately access user space and thus need to set the AC flag.

TODO: add explicit checks that the side with the kernel space pointer really points into kernel space.

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Link: http://lkml.kernel.org/n/tip-2aps0u00eer658fd5xyanan7@git.kernel.org
Cc: <stable@vger.kernel.org> # v3.7+
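The shape of such an annotation, sketched on the 32-bit from-user variant (stac()/clac() toggle EFLAGS.AC, which is what SMAP checks; the argument list is abridged from the real header):

    static inline __wsum
    csum_and_copy_from_user(const void __user *src, void *dst,
                            int len, __wsum sum, int *err_ptr)
    {
            __wsum ret;

            might_sleep();
            stac();         /* open the user-access window: set EFLAGS.AC */
            ret = csum_partial_copy_generic((__force void *)src, dst,
                                            len, sum, err_ptr, NULL);
            clac();         /* close it again before returning */
            return ret;
    }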
-
- 30 August 2013, 3 commits
-
-
Submitted by H. Peter Anvin
Update the comment in uaccess.h to reflect the changes for clang support: gcc only cares about the base register (most architectures don't encode the size of the operation in the operands like x86 does, and so it is treated effectively like a register number), whereas clang tries to enforce the size -- but not for register pairs.

Link: http://lkml.kernel.org/r/1377803585-5913-3-git-send-email-dl9pf@gmx.de
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: Jan-Simon Möller <dl9pf@gmx.de>
-
Submitted by Jan-Simon Möller
Clang does not support the "shortcut" we're taking here for gcc (see below). The patch uses the macro _ASM_DX to do the job.

From arch/x86/include/asm/uaccess.h:

    /*
     * Careful: we have to cast the result to the type of the pointer
     * for sign reasons.
     *
     * The use of %edx as the register specifier is a bit of a
     * simplification, as gcc only cares about it as the starting point
     * and not size: for a 64-bit value it will use %ecx:%edx on 32 bits
     * (%ecx being the next register in gcc's x86 register sequence), and
     * %rdx on 64 bits.
     */

[ hpa: I consider this a compatibility bug in clang as this reflects a bit of a misunderstanding about how register strings are used by gcc, but the workaround is straightforward and there is no particular reason to not do it. ]

Signed-off-by: Jan-Simon Möller <dl9pf@gmx.de>
Link: http://lkml.kernel.org/r/1377803585-5913-3-git-send-email-dl9pf@gmx.de
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Submitted by Jan-Simon Möller
The __ASM_* macros (e.g. __ASM_DX) are used to return the proper register name (e.g. edx for 32-bit / rdx for 64-bit). We want to use this also in arch/x86/include/asm/uaccess.h / get_user(). For this to work, we need a raw form, as both gcc and clang choke on the whitespace in a register asm() statement, and the __ASM_FORM macro surrounds the argument with blanks. A new macro, __ASM_FORM_RAW, was added, and __ASM_REG is changed to use the new RAW form.

Signed-off-by: Jan-Simon Möller <dl9pf@gmx.de>
Link: http://lkml.kernel.org/r/1377803585-5913-2-git-send-email-dl9pf@gmx.de
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
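A sketch of the relevant corner of asm.h after this change (the 32/64-bit selection is shown in simplified form; the point is the missing padding in the RAW variant):

    #ifndef __ASSEMBLY__
    # define __ASM_FORM(x)          " " #x " "  /* padded: fine inside asm templates */
    # define __ASM_FORM_RAW(x)      #x          /* unpadded: usable in register asm("...") */
    #endif

    #ifdef CONFIG_X86_64
    # define __ASM_SEL_RAW(a, b)    __ASM_FORM_RAW(b)
    #else
    # define __ASM_SEL_RAW(a, b)    __ASM_FORM_RAW(a)
    #endif

    #define __ASM_REG(reg)          __ASM_SEL_RAW(e##reg, r##reg)
    #define _ASM_DX                 __ASM_REG(dx)   /* "edx" or "rdx", no blanks */

    /* ...which makes this legal for both gcc and clang: */
    register unsigned long val asm(_ASM_DX);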
-
- 27 August 2013, 1 commit
-
-
Submitted by Marek Szyprowski
This patch cleans up the initialization of the dma contiguous framework. The all-in-one dma_declare_contiguous() function is now separated into dma_contiguous_reserve_area(), which only steals the memory from the memblock allocator, and dma_contiguous_add_device(), which assigns the given device to the specified reserved memory area.

This improves the flexibility in defining contiguous memory areas and assigning devices to them, because it is now possible to assign more than one device to a given contiguous memory area. Such a split in the initialization procedure is also required for the upcoming device tree support.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Acked-by: Tomasz Figa <t.figa@samsung.com>
-
- 26 August 2013, 2 commits
-
-
Submitted by Srivatsa Vaddagiri
kvm_hc_kick_cpu allows the calling vcpu to kick another vcpu out of halt state. The presence of these hypercalls is indicated to the guest via kvm_feature_pv_unhalt.

Fold the pv_unhalt flag into the GET_MP_STATE ioctl to aid migration: during migration, any vcpu that got kicked but did not become runnable (still in halted state) should be runnable after migration.

Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Suzuki Poulose <suzuki@in.ibm.com>
[Raghu: Apic related changes, folding pvunhalted into vcpu_runnable. Added flags for future use (suggested by Gleb)]
[Raghu: fold pv_unhalt flag as suggested by Eric Northup]
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Acked-by: Gleb Natapov <gleb@redhat.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
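On the guest side, kicking a halted vcpu then becomes a two-argument hypercall; a sketch assuming the KVM_HC_KICK_CPU definition from this series (the flags argument is the reserved one mentioned above):

    /* Wake the (possibly halted) target vcpu, e.g. from a pv ticketlock
     * unlock path in the guest. */
    static void kvm_kick_cpu(int cpu)
    {
            unsigned long flags = 0;    /* reserved for future use */
            int apicid = per_cpu(x86_cpu_to_apicid, cpu);

            kvm_hypercall2(KVM_HC_KICK_CPU, flags, apicid);
    }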
-
Submitted by Raghavendra K T
This is needed by both the guest and the host.

Originally-from: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Acked-by: Gleb Natapov <gleb@redhat.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
-
- 21 August 2013, 1 commit
-
-
Submitted by John Haxby
This affects xen pv guests with sufficiently old versions of xen and sufficiently new hardware. On such a system, a guest with a btrfs root won't even boot.

Signed-off-by: John Haxby <john.haxby@oracle.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 20 August 2013, 1 commit
-
-
Submitted by Yoshihiro YUNOMAE
Prevent crash_kexec() from deadlocking on ioapic_lock. When crash_kexec() is executed on a CPU, the CPU will take ioapic_lock in disable_IO_APIC(). So if the CPU gets an NMI while holding ioapic_lock, a deadlock will happen. In this patch, ioapic_lock is zapped/initialized before disable_IO_APIC().

You can reproduce this deadlock the following way:

1. Add mdelay(1000) after raw_spin_lock_irqsave() in native_ioapic_set_affinity() in arch/x86/kernel/apic/io_apic.c.

   Although the deadlock can occur without this modification, it will increase the potential of the deadlock problem.

2. Build and install the kernel.

3. Set up the OS which will run panic() and kexec when an NMI is injected:

       # echo "kernel.unknown_nmi_panic=1" >> /etc/sysctl.conf
       # vim /etc/default/grub
         (add "nmi_watchdog=0 crashkernel=256M" to the GRUB_CMDLINE_LINUX line)
       # grub2-mkconfig

4. Reboot the OS.

5. Run the following command for each vcpu on the guest:

       # while true; do echo <CPU num> > /proc/irq/<IO-APIC-edge or IO-APIC-fasteoi>/smp_affinity; done;

   By running this command, cpus will take ioapic_lock for setting affinity.

6. Inject an NMI (push a dump button, or execute 'virsh inject-nmi <domain>' if you use a VM). After injecting the NMI, panic() is called in an nmi-handler context. Then, kexec will normally run in panic(), but the operation will be stopped by the deadlock on ioapic_lock in crash_kexec()->machine_crash_shutdown()->native_machine_crash_shutdown()->disable_IO_APIC()->clear_IO_APIC()->clear_IO_APIC_pin()->ioapic_read_entry().

Signed-off-by: Yoshihiro YUNOMAE <yoshihiro.yunomae.ez@hitachi.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Gleb Natapov <gleb@redhat.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
Cc: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: yrl.pp-manager.tt@hitachi.com
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Seiji Aguchi <seiji.aguchi@hds.com>
Link: http://lkml.kernel.org/r/20130820070107.28245.83806.stgit@yunodevel
Signed-off-by: Ingo Molnar <mingo@kernel.org>
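A sketch of the shape of the fix, assuming a helper along the lines of ioapic_zap_locks(): the interrupted context can never release the lock, so the crash path reinitializes it before disable_IO_APIC() needs it:

    /* arch/x86/kernel/apic/io_apic.c */
    void ioapic_zap_locks(void)
    {
            raw_spin_lock_init(&ioapic_lock);
    }

    /* arch/x86/kernel/crash.c */
    void native_machine_crash_shutdown(struct pt_regs *regs)
    {
            /* ... stop other CPUs, disable the local APIC ... */
            ioapic_zap_locks();     /* the NMI'd CPU may still "hold" the lock */
            disable_IO_APIC();
            /* ... proceed with kexec ... */
    }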
-