- 20 October 2012, 1 commit

By Yan, Zheng
Initializing the uncore PMU on a virtualized CPU may hang the kernel. This is because kvm does not emulate the entire hardware. There are lots of uncore-related MSRs, and making kvm enumerate them all is a non-trivial task. So just disable uncore on virtualized CPUs.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Tested-by: Pekka Enberg <penberg@kernel.org>
Cc: a.p.zijlstra@chello.nl
Cc: eranian@google.com
Cc: andi@firstfloor.org
Cc: avi@redhat.com
Link: http://lkml.kernel.org/r/1345540117-14164-1-git-send-email-zheng.z.yan@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
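A minimal sketch of such a guard, assuming the uncore init entry point of that era and the existing X86_FEATURE_HYPERVISOR flag:

    /* Sketch: bail out of uncore setup when running as a guest. */
    static int __init intel_uncore_init(void)
    {
            if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
                    return -ENODEV; /* don't touch uncore MSRs kvm may not emulate */
            /* ... normal uncore PMU registration ... */
            return 0;
    }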
- 19 October 2012, 1 commit

By Borislav Petkov
450cc201 ("x86/mce: Provide boot argument to honour bios-set CMCI threshold") added the bios_cmci_threshold sysfs attribute, which was supposed to communicate to userspace tools that the BIOS-set CMCI threshold has been honoured. However, this info is not of any importance to userspace - it should rather get the actual, already-thresholded error count from MCi_STATUS[38:52]. So drop this before it becomes a used interface (good thing we caught this early in 3.7-rc1, right after the merge window closed).

Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Link: http://lkml.kernel.org/r/20121017105940.GA14590@x1.osrc.amd.com
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
- 18 October 2012, 1 commit

By Daniel J Blueman
When booting on a federated multi-server system (NumaScale), the processor Northbridge lookup returns NULL; add guards to prevent this from causing an oops. On those systems, the northbridge is accessed through MMIO and the "normal" northbridge enumeration in amd_nb.c doesn't work, since we're generating the northbridge ID from the initial APIC ID and the latter is not unique on those systems. Long story short, we end up without northbridge descriptors.

Signed-off-by: Daniel J Blueman <daniel@numascale-asia.com>
Cc: stable@vger.kernel.org # 3.6
Link: http://lkml.kernel.org/r/1349073725-14093-1-git-send-email-daniel@numascale-asia.com
[ Boris: beef up commit message ]
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
- 16 October 2012, 1 commit

By Peter Zijlstra
Intel PEBS in a VT-x context uses the DS address as a guest linear address, even though it's programmed by the host as a host linear address. This results in guest memory corruption and/or the hardware faulting and 'crashing' the virtual machine. Therefore we have to disable PEBS on VT-x enter and re-enable it on VT-x exit, enforcing a strict exclude_guest. This patch enforces exclude_guest on the kernel side.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Avi Kivity <avi@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Gleb Natapov <gleb@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Robert Richter <robert.richter@amd.com>
Link: http://lkml.kernel.org/r/1347569955-54626-3-git-send-email-dsahern@gmail.com
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
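One plausible shape for that kernel-side check is to reject precise (PEBS) events that do not exclude the guest; the hook name follows the x86 perf core, but the exact placement is an assumption:

    /* Sketch: refuse PEBS sampling that could run while a guest executes. */
    static int x86_pmu_hw_config(struct perf_event *event)
    {
            if (event->attr.precise_ip && !event->attr.exclude_guest)
                    return -EOPNOTSUPP; /* PEBS must not be armed across VT-x entry */
            /* ... remaining configuration checks ... */
            return 0;
    }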
- 05 October 2012, 1 commit

By Robert Richter
Add sysfs format entries for AMD IBS PMUs:

 # find /sys/bus/event_source/devices/ibs_*/format
 /sys/bus/event_source/devices/ibs_fetch/format
 /sys/bus/event_source/devices/ibs_fetch/format/rand_en
 /sys/bus/event_source/devices/ibs_op/format
 /sys/bus/event_source/devices/ibs_op/format/cnt_ctl

This allows specifying the following IBS options:

 $ perf record -e ibs_fetch/rand_en=1/GH ...
 $ perf record -e ibs_op/cnt_ctl=1/GH ...

The cnt_ctl option is only enabled if the IBS_CAPS_OPCNT bit is set in the IBS cpuid feature flags (AMD family 10h RevC and above).

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1347447584-28405-1-git-send-email-robert.richter@amd.com
[ Added small readability improvements. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
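Such entries are typically declared with the perf core's PMU_FORMAT_ATTR() helper; a sketch along those lines, where the config bit positions are assumptions taken from the IBS control register layout:

    /* Sketch: expose single-bit IBS controls as sysfs format attributes. */
    PMU_FORMAT_ATTR(rand_en, "config:57"); /* IbsRandEn in IBS_FETCH_CTL (assumed bit) */
    PMU_FORMAT_ATTR(cnt_ctl, "config:19"); /* IbsOpCntCtl in IBS_OP_CTL (assumed bit) */

    static struct attribute *ibs_fetch_format_attrs[] = {
            &format_attr_rand_en.attr,
            NULL,
    };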
- 04 October 2012, 2 commits

By Vince Weaver
The following patch adds perf_event support for the Xeon Phi PMU, as documented in the "Intel Xeon Phi Coprocessor (codename: Knights Corner) Performance Monitoring Units" manual. Even though it is a coprocessor, a Phi runs a full Linux environment and can support performance counters. This is just barebones support; it does not add support for interesting new features such as the SPFLT instruction, which allows starting/stopping events without entering the kernel. Internally the PMU is just like that of an original Pentium, but a "P6-like" MSR interface is provided. The interface is different enough from a real P6 that it's not easy (or practical) to reuse the code in perf_event_p6.c.

Acked-by: Lawrence F Meadows <lawrence.f.meadows@intel.com>
Acked-by: Cyrill Gorcunov <gorcunov@openvz.org>
Signed-off-by: Vince Weaver <vincent.weaver@maine.edu>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: eranian@gmail.com
Cc: Lawrence F <lawrence.f.meadows@intel.com>
Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1209261405320.8398@vincent-weaver-1.um.maine.edu
Signed-off-by: Ingo Molnar <mingo@kernel.org>
By Dan Carpenter
Using ARRAY_SIZE() is more readable.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Cc: Shai Fultheim <shai@scalemp.com>
Cc: Greg Kroah-Hartman <gregkh@suse.de>
Cc: Andreas Herrmann <andreas.herrmann3@amd.com>
Link: http://lkml.kernel.org/r/20121002083409.GM12398@elgon.mountain
Signed-off-by: Ingo Molnar <mingo@kernel.org>
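The idiom in question, as a generic illustration (the table and helper names here are hypothetical):

    /* Sketch: ARRAY_SIZE() instead of the open-coded sizeof division. */
    static const u32 msr_table[] = { 0x10, 0x1b, 0xfe }; /* hypothetical table */

    static void setup_msrs(void)
    {
            int i;

            /* Before: i < sizeof(msr_table) / sizeof(msr_table[0]) */
            for (i = 0; i < ARRAY_SIZE(msr_table); i++)
                    init_one_msr(msr_table[i]); /* hypothetical helper */
    }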
- 03 October 2012, 1 commit

By David Howells
Partition the header include path flags into two sets, one for kernelspace builds and one for userspace builds. Add the following directories to build after the ordinary include directories, so that #include will pick up the UAPI header directly if the kernel header has been moved there.

The userspace set (represented by the USERINCLUDE make variable) contains:

 -I $(srctree)/arch/$(hdr-arch)/include/uapi
 -I arch/$(hdr-arch)/include/generated/uapi
 -I $(srctree)/include/uapi
 -I include/generated/uapi
 -include $(srctree)/include/linux/kconfig.h

and the kernelspace set (represented by the LINUXINCLUDE make variable) contains:

 -I $(srctree)/arch/$(hdr-arch)/include
 -I arch/$(hdr-arch)/include/generated
 -I $(srctree)/include
 -I include --- if not building in the source tree

plus everything in the USERINCLUDE set.

Then use USERINCLUDE in building the x86 boot code.

Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Dave Jones <davej@redhat.com>
- 28 September 2012, 2 commits

By Naveen N. Rao
The ACPI spec doesn't provide a way for the BIOS to pass down recommended thresholds to the OS on a per-bank basis. This patch adds a new boot option which, if passed, tells Linux to use the CMCI thresholds set by the BIOS. As a fail-safe, we initialize the threshold to 1 if some banks have not been initialized by the BIOS, and warn the user.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
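A sketch of that fail-safe as it might sit in the per-bank discovery loop; the function name is an assumption, while the MSR macros are the kernel's existing ones:

    /* Sketch: honour a BIOS-set threshold if present, else default to 1. */
    static void cmci_apply_threshold(int bank)
    {
            u64 val;

            rdmsrl(MSR_IA32_MCx_CTL2(bank), val);
            if (bios_cmci_threshold && (val & MCI_CTL2_CMCI_THRESHOLD_MASK))
                    goto done; /* BIOS value honoured */
            val &= ~MCI_CTL2_CMCI_THRESHOLD_MASK;
            val |= 1; /* fail-safe default */
            pr_warn_once("MCE: BIOS left some CMCI thresholds unset\n");
    done:
            wrmsrl(MSR_IA32_MCx_CTL2(bank), val);
    }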
By H. Peter Anvin
There is no fundamental reason why we should switch SMEP and SMAP on during early cpu initialization just to switch them off again. Now, with %eflags and %cr4 forced to be initialized to a clean state, we only need the one-way enable. Also, make the functions inline to make them (somewhat) harder to abuse.

This does mean that SMEP and SMAP do not get initialized anywhere near as early. Even using early_param() instead of __setup() doesn't give us control early enough to do this during the early cpu initialization phase. This seems reasonable to me, because SMEP and SMAP should not matter until we have userspace to protect ourselves from, but it does potentially make it possible for a bug involving a "leak of permissions to userspace" to go uncaught.

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
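The one-way enable then reduces to something like the sketch below, using the era's set_in_cr4() helper; treat the exact form as an assumption:

    /* Sketch: enable-only setup, inlined into the caller. */
    static __always_inline void setup_smep(struct cpuinfo_x86 *c)
    {
            if (cpu_has(c, X86_FEATURE_SMEP))
                    set_in_cr4(X86_CR4_SMEP);
    }

    static __always_inline void setup_smap(struct cpuinfo_x86 *c)
    {
            if (cpu_has(c, X86_FEATURE_SMAP))
                    set_in_cr4(X86_CR4_SMAP);
    }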
- 27 September 2012, 1 commit

By Yan, Zheng
The variable box should not be declared as static. This could probably cause crashes with sufficiently parallel uncore PMU use.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Cc: eranian@google.com
Cc: a.p.zijlstra@chello.nl
Link: http://lkml.kernel.org/r/1348709606-2759-1-git-send-email-zheng.z.yan@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
- 26 September 2012, 1 commit

By Michael Wang
Since 'cpu == -1' in cpumask_next() is legal, there is no need to handle '*pos == 0' specially.

About the comment:

 /* just in case, cpu 0 is not the first */

A test with a cpumask in which cpu 0 is not the first has been done, and it works well. This patch removes that useless branch to clean up the code.

Signed-off-by: Michael Wang <wangyun@linux.vnet.ibm.com>
Cc: kjwinchester@gmail.com
Cc: borislav.petkov@amd.com
Cc: ak@linux.intel.com
Link: http://lkml.kernel.org/r/1348033343-23658-1-git-send-email-wangyun@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
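The simplified iterator then looks roughly like this, a sketch of /proc/cpuinfo's seq_file start callback after the cleanup:

    /* Sketch: cpumask_next(-1, ...) yields the first online cpu, so the
     * *pos == 0 case needs no special handling. */
    static void *c_start(struct seq_file *m, loff_t *pos)
    {
            *pos = cpumask_next(*pos - 1, cpu_online_mask);
            if ((*pos) < nr_cpu_ids)
                    return &cpu_data(*pos);
            return NULL;
    }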
- 22 September 2012, 2 commits

By H. Peter Anvin
If Supervisor Mode Access Prevention is available and not disabled by the user, turn it on. Also fix the expansion of SMEP (Supervisor Mode Execution Prevention).

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Link: http://lkml.kernel.org/r/1348256595-29119-10-git-send-email-hpa@linux.intel.com
By H. Peter Anvin
When Supervisor Mode Access Prevention (SMAP) is enabled, access to userspace from the kernel is controlled by the AC flag. To make the performance of manipulating that flag acceptable, there are two new instructions, STAC and CLAC, to set and clear it. This patch adds those instructions, via alternative(), when the SMAP feature is enabled. It also adds X86_EFLAGS_AC unconditionally to the SYSCALL entry mask; there is simply no reason to make that one conditional.

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Link: http://lkml.kernel.org/r/1348256595-29119-9-git-send-email-hpa@linux.intel.com
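In C code the pattern is a no-op that alternatives patch into the real instruction on SMAP-capable parts; a minimal sketch (real kernels spell the mnemonics via macros for old assemblers):

    /* Sketch: patched to STAC/CLAC only when X86_FEATURE_SMAP is set. */
    static __always_inline void stac(void)
    {
            alternative("", "stac", X86_FEATURE_SMAP);
    }

    static __always_inline void clac(void)
    {
            alternative("", "clac", X86_FEATURE_SMAP);
    }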
- 19 September 2012, 4 commits

By Stephane Eranian
This patch updates the existing Intel IvyBridge (model 58) support with proper PEBS event constraints. It cannot reuse the ones from SandyBridge because some events (0xd3) are specific to IvyBridge. Also, there is no UOPS_DISPATCHED.THREAD on IVB, so do not populate the PERF_COUNT_HW_STALLED_CYCLES_BACKEND mapping.

Signed-off-by: Stephane Eranian <eranian@google.com>
Cc: peterz@infradead.org
Cc: ak@linux.intel.com
Link: http://lkml.kernel.org/r/20120910230701.GA5898@quad
Signed-off-by: Ingo Molnar <mingo@kernel.org>
By Borislav Petkov
When acting on a user bug report, we find ourselves constantly asking for /proc/cpuinfo in order to know the exact family, model and stepping of the CPU in question. Instead of having to ask this, add it to dmesg so that it is visible and no ambiguities can ensue from looking at the official name string of the CPU coming from CPUID and trying to map it to f/m/s.

Output then looks like this:

 [    0.146041] smpboot: CPU0: AMD FX(tm)-8100 Eight-Core Processor (fam: 15, model: 01, stepping: 02)

Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Cc: Andreas Herrmann <andreas.herrmann3@amd.com>
Link: http://lkml.kernel.org/r/1347640666-13638-1-git-send-email-bp@amd64.org
[ tweaked it minimally to add commas. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
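The print itself is a one-liner along these lines, a sketch whose format string follows the sample output above (the struct fields are the era's cpuinfo_x86 names):

    /* Sketch: emit family/model/stepping next to the CPUID brand string. */
    static void announce_cpu_fms(int cpu, struct cpuinfo_x86 *c)
    {
            pr_info("smpboot: CPU%d: %s (fam: %02x, model: %02x, stepping: %02x)\n",
                    cpu, c->x86_model_id, c->x86, c->x86_model, c->x86_mask);
    }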
By Suresh Siddha
Decouple the non-lazy/eager fpu restore policy from the existence of the xsave feature. Introduce a synthetic CPUID flag to represent the eagerfpu policy. The "eagerfpu=on" boot parameter will enable the policy.

Requested-by: H. Peter Anvin <hpa@zytor.com>
Requested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Link: http://lkml.kernel.org/r/1347300665-6209-2-git-send-email-suresh.b.siddha@intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
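A sketch of how such a boot parameter typically forces a synthetic flag; the flag name X86_FEATURE_EAGER_FPU matches the description but should be read as an assumption:

    /* Sketch: "eagerfpu=on|off" flips the synthetic policy flag at boot. */
    static int __init eager_fpu_setup(char *s)
    {
            if (!strcmp(s, "on"))
                    setup_force_cpu_cap(X86_FEATURE_EAGER_FPU);
            else if (!strcmp(s, "off"))
                    setup_clear_cpu_cap(X86_FEATURE_EAGER_FPU);
            return 1;
    }
    __setup("eagerfpu=", eager_fpu_setup);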
By Suresh Siddha
The fundamental model of the current Linux kernel is to lazily init and restore FPU state instead of restoring the task state during context switch. This changes that fundamental lazy model to the non-lazy model for processors supporting the xsave feature. Reasons driving this model change are:

i. Newer processors support optimized state save/restore using xsaveopt and xrstor by tracking the INIT state and MODIFIED state during context switch. This is faster than modifying the cr0.TS bit, which has serializing semantics.

ii. Newer glibc versions use SSE for some of the optimized copy/clear routines. With certain workloads (like boot, kernel compilation etc.), the application completes its work within the first 5 task switches, thus taking up to 5 #DNA traps with the kernel not getting a chance to apply the above-mentioned pre-load heuristic.

iii. Some xstate features (like AMD's LWP feature) don't honor the cr0.TS bit and thus will not work correctly in the presence of lazy restore. Non-lazy state restore is needed for enabling such features.

Some data on a two socket SNB system:

 * Saved 20K DNA exceptions during boot on a two socket SNB system.
 * Saved 50K DNA exceptions during a kernel-compilation workload.
 * Improved throughput of the AVX based checksumming function inside the kernel by ~15%, as xsave/xrstor is faster than the serializing clts/stts pair.

Also, kernel_fpu_begin/end() now relies on the patched alternative instructions. So move check_fpu(), which uses kernel_fpu_begin/end(), after alternative_instructions().

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Link: http://lkml.kernel.org/r/1345842782-24175-7-git-send-email-suresh.b.siddha@intel.com

Merge 32-bit boot fix from:
Link: http://lkml.kernel.org/r/1347300665-6209-4-git-send-email-suresh.b.siddha@intel.com

Cc: Jim Kukunas <james.t.kukunas@linux.intel.com>
Cc: NeilBrown <neilb@suse.de>
Cc: Avi Kivity <avi@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
- 18 September 2012, 1 commit

By Yan, Zheng
This patch adds a cpumask file to the uncore pmu sysfs directory. The cpumask file contains one active cpu for every socket.

Signed-off-by: "Yan, Zheng" <zheng.z.yan@intel.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Cc: "Yan, Zheng" <zheng.z.yan@intel.com>
Link: http://lkml.kernel.org/r/1347263631-23175-2-git-send-email-zheng.z.yan@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
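A sketch of such an attribute; cpumap_print_to_pagebuf() is the current kernel's helper for this (the 2012 code predates it), and the mask name is an assumption:

    /* Sketch: expose the per-socket reader cpus as a sysfs "cpumask" file. */
    static cpumask_t uncore_cpu_mask; /* one active cpu per socket (assumed name) */

    static ssize_t uncore_get_attr_cpumask(struct device *dev,
                                           struct device_attribute *attr, char *buf)
    {
            return cpumap_print_to_pagebuf(true, buf, &uncore_cpu_mask);
    }
    static DEVICE_ATTR(cpumask, S_IRUGO, uncore_get_attr_cpumask, NULL);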
- 13 September 2012, 3 commits

By Ian Campbell
On 64-bit x86 we save the current eflags in cpu_init for use in ret_from_fork. Strictly speaking, reserved bits in EFLAGS should be read as written, but in practice it is unlikely that EFLAGS could ever be extended in this way, and the kernel already clears any undefined flags early on. The equivalent 32-bit code simply hard codes 0x0202 as the new EFLAGS. This change makes 64-bit use the same mechanism to set up the initial EFLAGS on fork. Note that 64-bit resets EFLAGS before calling schedule_tail(), as opposed to 32-bit which calls schedule_tail() first. Therefore the correct value for EFLAGS has the opposite IF bit.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
Acked-by: Andi Kleen <ak@linux.intel.com>
Acked-by: "H. Peter Anvin" <hpa@zytor.com>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/20120824195847.GA31628@moon
Signed-off-by: Ingo Molnar <mingo@kernel.org>
By Robert Richter
The current implementation simply ignores attribute flags. Thus, there is no notification to userland of unsupported features. Check the syscall's attribute flags to let userland know if a feature is supported by the kernel. This is also needed to distinguish from future kernels that might support a feature.

Cc: <stable@vger.kernel.org> v3.5..
Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120910093018.GO8285@erda.amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
By Stephane Eranian
This patch exports the clockticks event and its encoding to user level. The clockticks event was exported for Nehalem/Westmere but not for Sandy Bridge (client). Given that it uses a special encoding, it needs to be exported to user tools so users can do:

 # perf stat -a -C 0 -e uncore_cbox_0/clockticks/ sleep 1

Signed-off-by: Stephane Eranian <eranian@google.com>
Acked-by: Yan, Zheng <zheng.z.yan@intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120829130122.GA32336@quad
Signed-off-by: Ingo Molnar <mingo@kernel.org>
- 04 September 2012, 1 commit

By Stephane Eranian
This patch enables perf_events support for Intel Cedarview Atom (model 54) processors. Support includes PEBS and LBR. Tested on my Atom N2600 netbook.

Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120820092421.GA11284@quad
Signed-off-by: Ingo Molnar <mingo@kernel.org>
- 14 August 2012, 4 commits

By Gleb Natapov
If a PMU counter has PEBS enabled, it is not enough to disable the counter on guest entry, since a PEBS memory write can overshoot guest entry and corrupt guest memory. Disabling PEBS during guest entry solves the problem.

Tested-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120809085234.GI3341@redhat.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
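Conceptually this extends the atomic host/guest MSR switch list with MSR_IA32_PEBS_ENABLE; a sketch in the shape of the x86 PMU's guest-MSR callback, with details treated as assumptions:

    /* Sketch: have VT-x entry atomically clear PEBS_ENABLE for the guest. */
    static struct perf_guest_switch_msr *intel_guest_get_msrs(int *nr)
    {
            struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
            struct perf_guest_switch_msr *arr = cpuc->guest_switch_msrs;

            arr[0].msr   = MSR_CORE_PERF_GLOBAL_CTRL;
            arr[0].host  = x86_pmu.intel_ctrl & ~cpuc->intel_ctrl_guest_mask;
            arr[0].guest = x86_pmu.intel_ctrl & ~cpuc->intel_ctrl_host_mask;

            arr[1].msr   = MSR_IA32_PEBS_ENABLE;
            arr[1].host  = cpuc->pebs_enabled;
            arr[1].guest = 0; /* no PEBS writes while the guest runs */

            *nr = 2;
            return arr;
    }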
By Yan, Zheng
The Westmere-EX uncore is similar to the Nehalem-EX uncore. The differences are:

 - The Westmere-EX uncore has 10 instances of the Cbox. The MSRs for Cbox8 and Cbox9 in the Westmere-EX aren't contiguous with Cbox 0~7.
 - The fvid field in the ZDP_CTL_FVC register in the Mbox is different: it's 5 bits in the Nehalem-EX, 6 bits in the Westmere-EX.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1344229882-3907-3-git-send-email-zheng.z.yan@intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
By Yan, Zheng
This patch includes the following fixes and updates:

 - Only some events in the Sbox and Mbox can use the match/mask registers; add code to check this.
 - The format definitions for the xbr_mm_cfg and xbr_match registers in the Rbox are wrong: xbr_mm_cfg should use 32 bits, xbr_match should use 64 bits.
 - Clean up the Rbox code. Compute the addresses of extra registers in the enable_event function instead of the hw_config function. This simplifies the code in nhmex_rbox_alter_er().

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1344229882-3907-2-git-send-email-zheng.z.yan@intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
By Borislav Petkov
Fix the following section mismatch:

 WARNING: arch/x86/kernel/cpu/built-in.o(.text+0x7ad9): Section mismatch in reference from the function uncore_types_exit() to the function .init.text:uncore_type_exit()
 The function uncore_types_exit() references the function __init uncore_type_exit().
 This is often because uncore_types_exit lacks a __init annotation or the annotation of uncore_type_exit is wrong.

caused by 14371cce ("perf: Add generic PCI uncore PMU device support").

Cc: Zheng Yan <zheng.z.yan@intel.com>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1339741902-8449-8-git-send-email-zheng.z.yan@intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
- 10 August 2012, 2 commits

By Chen Gong
On Intel systems, corrected machine check interrupts (CMCI) may be sent to multiple logical processors, possibly to all processors on the affected socket (SDM Volume 3B, "15.5.1 CMCI Local APIC Interface"). This means that a persistent error (such as a stuck bit in ECC memory) may cause a storm of interrupts that greatly hinders or prevents forward progress (probably on many processors).

To solve this, we keep track of the rate at which each processor sees CMCI. If we exceed a threshold, we disable CMCI delivery and switch to polling the machine check banks. If the storm subsides (none of the affected processors sees any more errors for a complete poll interval), we re-enable CMCI.

[Tony: Added console messages when a storm begins/ends and increased the storm threshold from 5 to 15 so we have a few more logged entries before we disable interrupts and start dropping reports]

Signed-off-by: Chen Gong <gong.chen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Chen Gong <gong.chen@linux.intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
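The rate tracking can be sketched as a per-cpu count within a one-second window; the names and the window length below mirror the description but are assumptions:

    /* Sketch: count CMCIs per cpu per second; above the threshold, storm mode. */
    #define CMCI_STORM_THRESHOLD 15

    static DEFINE_PER_CPU(unsigned long, cmci_time_stamp);
    static DEFINE_PER_CPU(unsigned int, cmci_storm_cnt);

    static bool cmci_storm_detect(void)
    {
            unsigned int cnt = __this_cpu_read(cmci_storm_cnt);
            unsigned long ts = __this_cpu_read(cmci_time_stamp);
            unsigned long now = jiffies;

            if (ts && time_before_eq(now, ts + HZ)) {
                    cnt++;
            } else {
                    cnt = 1;
                    __this_cpu_write(cmci_time_stamp, now);
            }
            __this_cpu_write(cmci_storm_cnt, cnt);

            return cnt >= CMCI_STORM_THRESHOLD; /* caller disables CMCI, polls */
    }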
By Tony Luck
cmci_discover() works out which machine check banks support CMCI, and which of those are shared by multiple logical processors. It uses this information to ensure that exactly one cpu is designated the owner of each bank, so that when interrupts are broadcast to multiple cpus, only one of them will look in a shared bank to log the error and clear the bank.

At boot time cmci_discover() performs this task silently. But during certain cpu hotplug operations it prints out a set of summary lines like this:

 CPU 35 MCA banks CMCI:0 CMCI:1 CMCI:3 CMCI:5 CMCI:6 CMCI:7 CMCI:8 CMCI:9 CMCI:10 CMCI:11
 CPU 1 MCA banks CMCI:0 CMCI:1 CMCI:3
 CPU 39 MCA banks CMCI:0 CMCI:1 CMCI:3
 CPU 38 MCA banks CMCI:0 CMCI:1 CMCI:3
 CPU 32 MCA banks CMCI:0 CMCI:1 CMCI:3
 CPU 37 MCA banks CMCI:0 CMCI:1 CMCI:3
 CPU 36 MCA banks CMCI:0 CMCI:1 CMCI:3
 CPU 34 MCA banks CMCI:0 CMCI:1 CMCI:3

The value of these messages seems very low. A user might painstakingly cross-check against the data sheet for a processor to ensure that all CMCI-supported banks are correctly reported, but this seems improbable. If users really wanted to do this, we should print the information at boot time too.

Remove the messages.

Signed-off-by: Tony Luck <tony.luck@intel.com>
- 09 August 2012, 1 commit

By Suresh Siddha
Clear the AVX and AVX2 features along with clearing the XSAVE feature bits, as part of parsing the "noxsave" parameter. This fixes the kernel boot panic with the "noxsave" boot parameter. We could have checked cpu_has_osxsave along with cpu_has_avx etc., but Peter mentioned that clearing the feature bits will be better for uses like static_cpu_has() etc.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Link: http://lkml.kernel.org/r/1343755754.2041.2.camel@sbsiddha-desk.sc.intel.com
Cc: <stable@vger.kernel.org> # v3.5
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
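The parsing path then clears the dependent bits explicitly; a minimal sketch:

    /* Sketch: "noxsave" must also drop features that depend on XSAVE. */
    static int __init x86_xsave_setup(char *s)
    {
            setup_clear_cpu_cap(X86_FEATURE_XSAVE);
            setup_clear_cpu_cap(X86_FEATURE_XSAVEOPT);
            setup_clear_cpu_cap(X86_FEATURE_AVX);
            setup_clear_cpu_cap(X86_FEATURE_AVX2);
            return 1;
    }
    __setup("noxsave", x86_xsave_setup);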
- 07 August 2012, 4 commits

By Borislav Petkov
Run the mprotect.c microbenchmark on all our families >= K8 and preset the flushall shift variable accordingly.

Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Link: http://lkml.kernel.org/r/1344272439-29080-5-git-send-email-bp@amd64.org
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
By Borislav Petkov
Read the I- and DTLB entry counts from CPUID on AMD. Handle all the different family-specific cases.

Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Link: http://lkml.kernel.org/r/1344272439-29080-4-git-send-email-bp@amd64.org
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
By Borislav Petkov
Push the max CPUID leaf check into the ->detect_tlb function and remove the general test case from the generic path.

Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Link: http://lkml.kernel.org/r/1344272439-29080-3-git-send-email-bp@amd64.org
Acked-by: Alex Shi <alex.shi@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
By Borislav Petkov
The TLB characteristics appeared like this in dmesg:

 [    0.065817] Last level iTLB entries: 4KB 512, 2MB 1024, 4MB 512
 [    0.065817] Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 512
 [    0.065817] tlb_flushall_shift is 0xffffffff

where tlb_flushall_shift is actually -1, but dumped as a hex number. However, the Kconfig option CONFIG_DEBUG_TLBFLUSH and the rest of the code treat this as a signed decimal and state: "If you set it to -1, the code flushes the whole TLB unconditionally." So, fix its formatting in accordance with the other references to it.

Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Link: http://lkml.kernel.org/r/1344272439-29080-2-git-send-email-bp@amd64.org
Acked-by: Alex Shi <alex.shi@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
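The fix amounts to printing the value as a signed decimal; a sketch:

    /* Sketch: use %d so the sentinel -1 prints as -1, not as 0xffffffff. */
    static void print_tlb_shift(int tlb_flushall_shift)
    {
            pr_info("tlb_flushall_shift: %d\n", tlb_flushall_shift);
    }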
- 04 August 2012, 4 commits

By Thomas Gleixner
No point in having duplicate cases if we can simply mask the FROZEN bit out.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Chen Gong <gong.chen@linux.intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
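The usual idiom for this in cpu-hotplug notifiers, as a sketch (the callback name is assumed):

    /* Sketch: treat CPU_ONLINE and CPU_ONLINE_FROZEN identically. */
    static int mce_cpu_callback(struct notifier_block *nfb,
                                unsigned long action, void *hcpu)
    {
            switch (action & ~CPU_TASKS_FROZEN) {
            case CPU_ONLINE:
                    /* one case instead of CPU_ONLINE + CPU_ONLINE_FROZEN */
                    break;
            case CPU_DEAD:
                    break;
            }
            return NOTIFY_OK;
    }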
By Thomas Gleixner
Split the timer init function into an init part and a start part, so the start part can replace the open-coded version in CPU_DOWN_FAILED.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Chen Gong <gong.chen@linux.intel.com>
Acked-by: Borislav Petkov <borislav.petkov@amd.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
By Thomas Gleixner
raise_mce() fiddles with global state but lacks any kind of serialization. Add a mutex around the raise_mce() call, so concurrent writers do not stomp on each other's toes.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Chen Gong <gong.chen@linux.intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
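A sketch of the serialization, assuming the injection entry point is the mce-inject writer:

    /* Sketch: serialize injectors; raise_mce() touches global state. */
    static DEFINE_MUTEX(mce_inject_mutex);

    static int inject_mce_locked(struct mce *m)
    {
            int err;

            mutex_lock(&mce_inject_mutex);
            err = raise_mce(m);
            mutex_unlock(&mce_inject_mutex);
            return err;
    }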
By Thomas Gleixner
raise_mce() has a code path which does not disable preemption when raise_local() is called. The per-cpu variable access in raise_local() depends on preemption being disabled to be functional. So that code path was either never tested, or never tested with CONFIG_DEBUG_PREEMPT enabled. Add the missing preempt_disable/enable() pair around the call.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Chen Gong <gong.chen@linux.intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
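The missing pair around the call, as a sketch of the fix inside raise_mce():

    /* raise_local() reads per-cpu state, so the cpu must not change under us. */
    preempt_disable();
    raise_local();
    preempt_enable();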
- 31 July 2012, 2 commits

By Peter Zijlstra
Some PMUs don't provide a full register set for their sample, specifically 'advanced' PMUs like AMD IBS and Intel PEBS which provide 'better' than regular interrupt accuracy. In this case we use the interrupt regs as a basis and overwrite some fields (typically IP) with different information.

The perf core however uses user_mode() to distinguish user/kernel samples, and user_mode() relies on regs->cs. If the interrupt skid pushed us over a boundary, the new IP might not be in the same domain as the interrupt. Commit ce5c1fe9 ("perf/x86: Fix USER/KERNEL tagging of samples") tried to fix this by making the perf core use kernel_ip(). This however is wrong (TM), as pointed out by Linus, since it doesn't allow for VM86 and non-zero based segments in IA32 mode.

Therefore, provide a new helper to set the regs->ip field, set_linear_ip(), which massages the regs into a suitable state assuming the provided IP is in fact a linear address. Also modify perf_instruction_pointer() and perf_callchain_user() to deal with segment base offsets.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1341910954.3462.102.camel@twins
Signed-off-by: Ingo Molnar <mingo@kernel.org>
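The read side can then be sketched as follows, where code_segment_base() is assumed to be a helper returning the base of the sampled code segment (non-zero for VM86 and based segments):

    /* Sketch: report IPs relative to a possibly non-zero code segment base. */
    unsigned long perf_instruction_pointer(struct pt_regs *regs)
    {
            return regs->ip + code_segment_base(regs);
    }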
By Andrew Morton
i386 allmodconfig:

 arch/x86/kernel/cpu/perf_event_intel_uncore.c: In function 'uncore_pmu_hrtimer':
 arch/x86/kernel/cpu/perf_event_intel_uncore.c:728: warning: integer overflow in expression
 arch/x86/kernel/cpu/perf_event_intel_uncore.c: In function 'uncore_pmu_start_hrtimer':
 arch/x86/kernel/cpu/perf_event_intel_uncore.c:735: warning: integer overflow in expression

Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Zheng Yan <zheng.z.yan@intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-h84qlqj02zrojmxxybzmy9hi@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
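On 32-bit, a nanosecond interval computed with plain int arithmetic overflows; the usual cure is a 64-bit literal. A sketch of the kind of fix involved (the macro name mirrors the file in question but is an assumption):

    /* Sketch: force the multiply into 64-bit so 60 * NSEC_PER_SEC cannot
     * overflow 32-bit int arithmetic on i386. */
    #define UNCORE_PMU_HRTIMER_INTERVAL (60LL * NSEC_PER_SEC)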