- 22 July 2011, 2 commits
By Robert Richter

copy_from_user_nmi() is used in oprofile and perf. Move it to sit with other library functions like copy_from_user(). As this is x86 code shared by 32 and 64 bit, create a new file usercopy.c for the unified code.

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20110607172413.GJ20052@erda.amd.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
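The function kept a copy_from_user()-style signature after the move. A minimal usage sketch, assuming a fixed 64-byte window (the helper name, buffer size, and declaring header are assumptions, and the return-value convention has varied across kernel versions):

    #include <linux/uaccess.h>  /* declaration location varies by kernel */

    /* Copy a small window of the user stack from NMI context, where a
     * plain copy_from_user() must not be used (it can fault and sleep). */
    static unsigned long sample_user_stack(const void __user *sp, void *buf)
    {
        return copy_from_user_nmi(buf, sp, 64);
    }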
-
By Cyrill Gorcunov

This patch:
 - fixes typos in comments and clarifies the text
 - renames the obscure p4_event_alias::alter member to ::alternative, matching ::original
 - drops the parentheses from the return of p4_get_alias_event()

No functional changes.

Reported-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
Link: http://lkml.kernel.org/r/20110721160625.GX7492@sun
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 16 July 2011, 1 commit
By Len Brown

Fix the printk_once() so that it actually prints (it didn't print before due to a stray comma).

[ hpa: changed to an incremental patch and adjusted the description accordingly. ]

Signed-off-by: Len Brown <len.brown@intel.com>
Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1107151732480.18606@x980
Cc: <stable@kernel.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
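The bug class is easy to reproduce. A hedged reconstruction of the stray-comma pattern (the message text is an assumption, not the original line):

    #include <linux/printk.h>

    static void __init epb_notice(void)
    {
        /* Broken: the comma makes KERN_INFO a standalone argument, so the
         * log-level string is consumed as the format and the real message
         * is never printed. */
        printk_once(KERN_INFO, "ENERGY_PERF_BIAS: Set to 'normal'\n");

        /* Fixed: the level prefix must be string-concatenated with the
         * format, with no comma in between. */
        printk_once(KERN_INFO "ENERGY_PERF_BIAS: Set to 'normal'\n");
    }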
-
- 15 July 2011, 2 commits
By Cyrill Gorcunov

Instead of the hw_nmi_watchdog_set_attr() weak function and the corresponding x86_pmu::hw_watchdog_set_attr() call, we introduce an event alias mechanism, which allows us to drop those routines completely and isolate the quirks of the Netburst architecture inside the P4 PMU code only. The main idea remains the same though -- to allow nmi-watchdog and perf top to run simultaneously.

Note the aliasing mechanism applies to the generic PERF_COUNT_HW_CPU_CYCLES event only, because an arbitrary event (say, one passed in as RAW initially) might have additional bits set inside the ESCR register, changing the behaviour of the event, and we can no longer guarantee that the alias event will give the same result.

P.S. A huge thanks to Don and Steven for testing and early review.

Acked-by: Don Zickus <dzickus@redhat.com>
Tested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
CC: Ingo Molnar <mingo@elte.hu>
CC: Peter Zijlstra <a.p.zijlstra@chello.nl>
CC: Stephane Eranian <eranian@google.com>
CC: Lin Ming <ming.m.lin@intel.com>
CC: Arnaldo Carvalho de Melo <acme@redhat.com>
CC: Frederic Weisbecker <fweisbec@gmail.com>
Link: http://lkml.kernel.org/r/20110708201712.GS23657@sun
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
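A minimal sketch of what such an alias table can look like, assuming the p4_event_alias member names mentioned in the cleanup above (the table contents and lookup helper are illustrative, not the exact upstream code):

    #include <linux/kernel.h>
    #include <linux/types.h>

    /* Map a generic event encoding to a Netburst-specific alternative
     * that lives on a less contended ESCR register. */
    struct p4_event_alias {
        u64 original;     /* e.g. the PERF_COUNT_HW_CPU_CYCLES encoding */
        u64 alternative;  /* equivalent encoding on a spare ESCR */
    };

    static struct p4_event_alias p4_aliases[] = {
        { .original = 0x0 /* cycles config */, .alternative = 0x0 /* alias */ },
    };

    static u64 p4_get_alias_event(u64 config)
    {
        int i;

        for (i = 0; i < ARRAY_SIZE(p4_aliases); i++)
            if (config == p4_aliases[i].original)
                return p4_aliases[i].alternative;

        return 0;  /* no alias known for this event */
    }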
-
By Len Brown

Since 2.6.36 (23016bf0), Linux has reported the existence of "epb" in /proc/cpuinfo. Since 2.6.38 (d5532ee7), the x86_energy_perf_policy(8) utility has been available in-tree to update MSR_IA32_ENERGY_PERF_BIAS. However, the typical BIOS fails to initialize the MSR, presumably because this is handled by high-volume shrink-wrap operating systems... Linux distros, on the other hand, do not yet invoke x86_energy_perf_policy(8). As a result, WSM-EP, SNB, and later hardware from Intel will run in its default hardware power-on state (performance), which assumes that users care about performance at all costs and not about energy efficiency. While that is fine for performance benchmarks, the hardware's intended default operating point is "normal" mode...

Initialize the MSR to "normal" by default during kernel boot. x86_energy_perf_policy(8) is available to change the default after boot, should the user have a different preference.

Signed-off-by: Len Brown <len.brown@intel.com>
Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1107140051020.18606@x980
Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: <stable@kernel.org>
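A hedged sketch of the boot-time initialization described here; the ENERGY_PERF_BIAS_* names follow the kernel's msr-index.h convention, but treat the exact values and the function name as assumptions:

    #include <asm/msr.h>
    #include <asm/processor.h>

    #define ENERGY_PERF_BIAS_PERFORMANCE  0   /* hardware power-on default */
    #define ENERGY_PERF_BIAS_NORMAL       6

    static void __cpuinit init_energy_perf_bias(struct cpuinfo_x86 *c)
    {
        u64 epb;

        if (!cpu_has(c, X86_FEATURE_EPB))
            return;

        rdmsrl(MSR_IA32_ENERGY_PERF_BIAS, epb);
        /* Only override the untouched power-on default; leave any value
         * that firmware or the user already programmed alone. */
        if ((epb & 0xf) != ENERGY_PERF_BIAS_PERFORMANCE)
            return;

        epb = (epb & ~0xfULL) | ENERGY_PERF_BIAS_NORMAL;
        wrmsrl(MSR_IA32_ENERGY_PERF_BIAS, epb);
    }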
-
- 09 July 2011, 1 commit
By Anupam Chanda

Detect Xen before Hyper-V, because in Viridian compatibility mode Xen presents itself as Hyper-V. Move Xen to the top, since it seems more likely that Xen would emulate VMware than vice versa.

Signed-off-by: Anupam Chanda <achanda@nicira.com>
Link: http://lkml.kernel.org/r/1310150570-26810-1-git-send-email-achanda@nicira.com
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Yaozu (Eddie) Dong <eddie.dong@intel.com>
Reviewed-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
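Detection priority is simply the order of a table scanned front to back; a simplified sketch of that pattern (the table and entry names are assumptions based on the hypervisor framework of that era):

    /* arch/x86/kernel/cpu/hypervisor.c, simplified sketch */
    static const struct hypervisor_x86 * const hypervisors[] = {
        &x86_hyper_xen_hvm,    /* must precede Hyper-V: Viridian mode */
        &x86_hyper_vmware,
        &x86_hyper_ms_hyperv,
    };

    static const struct hypervisor_x86 *detect_hypervisor(void)
    {
        int i;

        for (i = 0; i < ARRAY_SIZE(hypervisors); i++)
            if (hypervisors[i]->detect())
                return hypervisors[i];

        return NULL;
    }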
-
- 02 July 2011, 1 commit
By Sergei Shtylyov

This code uses PCI_CLASS_REVISION instead of PCI_REVISION_ID, so it wasn't converted by commit 44c10138 ("PCI: Change all drivers to use pci_device->revision") before being moved to arch/x86/... Do it now at last -- and save one level of indentation...

Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Link: http://lkml.kernel.org/r/201107012242.08347.sshtylyov@ru.mvista.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
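The conversion pattern, as a hedged before/after sketch (the revision threshold is a made-up placeholder):

    #include <linux/pci.h>

    /* Before: read the combined class/revision dword, mask out revision. */
    static bool old_check(struct pci_dev *dev)
    {
        u32 class_rev;

        pci_read_config_dword(dev, PCI_CLASS_REVISION, &class_rev);
        return (class_rev & 0xff) >= 0x10;   /* placeholder threshold */
    }

    /* After: the PCI core cached the revision at enumeration time. */
    static bool new_check(struct pci_dev *dev)
    {
        return dev->revision >= 0x10;
    }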
-
- 01 July 2011, 10 commits
By Avi Kivity

The v1 PMU does not have any fixed counters. Using the v2 constraints, which do have fixed counters, causes an additional choice to be present in the weight calculation, but not when actually scheduling the event, leading to an event not being scheduled at all.

Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1309362157-6596-3-git-send-email-avi@redhat.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Peter Zijlstra

Add a NODE level to the generic cache events, which is used to measure local vs remote memory accesses. Like all other cache events, an ACCESS is HIT+MISS; if there is no way to distinguish between reads and writes, do reads only, etc.

The below needs filling out for !x86 (which I filled out with unsupported events). I'm fairly sure ARM can leave it like that, since it doesn't strike me as an architecture that even has NUMA support. SH might have something, since it does appear to have some NUMA bits. Sparc64, PowerPC and MIPS certainly want a good look there, since they clearly are NUMA capable.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: David Miller <davem@davemloft.net>
Cc: Anton Blanchard <anton@samba.org>
Cc: David Daney <ddaney@caviumnetworks.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1303508226.4865.8.camel@laptop
Signed-off-by: Ingo Molnar <mingo@elte.hu>
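The new level slots into the generic cache-event enum alongside the existing ones; a sketch of the uapi side (the numeric value matches its position after this addition, but verify against your tree's perf_event.h):

    /* include/linux/perf_event.h, excerpt */
    enum perf_hw_cache_id {
        PERF_COUNT_HW_CACHE_L1D   = 0,
        PERF_COUNT_HW_CACHE_L1I   = 1,
        PERF_COUNT_HW_CACHE_LL    = 2,
        PERF_COUNT_HW_CACHE_DTLB  = 3,
        PERF_COUNT_HW_CACHE_ITLB  = 4,
        PERF_COUNT_HW_CACHE_BPU   = 5,
        PERF_COUNT_HW_CACHE_NODE  = 6,  /* new: local vs remote memory */

        PERF_COUNT_HW_CACHE_MAX,        /* non-ABI */
    };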
-
By Peter Zijlstra

Since the OFFCORE registers are fully symmetric, try the other one when the specified one is already in use.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1306141897.18455.8.camel@twins
Signed-off-by: Ingo Molnar <mingo@elte.hu>
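A hedged sketch of the retry idea; the MSR names are real, and 0xb7/0xbb are the documented OFFCORE_RESPONSE_0/1 event selects, but the surrounding structure is a simplification, not the upstream code:

    #include <linux/perf_event.h>

    /* If OFFCORE_RSP_0 is already claimed with a different filter value,
     * retry on the symmetric second register: swap both the extra MSR
     * and the event-select code. */
    static void try_alternate_offcore(struct hw_perf_event *hwc)
    {
        if (hwc->extra_reg.reg != MSR_OFFCORE_RSP_0)
            return;

        hwc->config = (hwc->config & ~0xffULL) | 0xbb; /* OFFCORE_RESPONSE_1 */
        hwc->extra_reg.reg = MSR_OFFCORE_RSP_1;
    }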
-
By Stephane Eranian

This patch adds Intel Sandy Bridge offcore_response support by providing the low-level constraint table for those events. On Sandy Bridge, there are two offcore_response events. Each uses its own dedicated extra register. But those registers are NOT shared between sibling CPUs when HT is on, unlike Nehalem/Westmere; they are always private to each CPU. They still need to be controlled within an event group, though: all events within an event group must use the same value for the extra MSR. That's not controlled by the second patch in this series.

Furthermore, on Sandy Bridge the offcore_response events have NO counter constraints, contrary to what the official documentation indicates, so drop the events from the constraint table.

Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20110606145712.GA7304@quad
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Stephane Eranian

The validate_group() function needs to validate events with extra shared regs: within an event group, only events with the same value for the extra reg can co-exist. This was not checked by validate_group() because it was missing the shared_regs logic. This patch changes the allocation of the fake cpuc used for validation to also point to a fake shared_regs structure, such that group events are properly tested. It modifies __intel_shared_reg_get_constraints() to use spin_lock_irqsave() to avoid lockdep issues.

Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20110606145708.GA7279@quad
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Stephane Eranian

This patch improves the code managing the extra shared registers used for offcore_response events on Intel Nehalem/Westmere. The idea is to use static allocation instead of dynamic allocation, which greatly simplifies the get and put constraint routines for those events. The patch also renames per_core to shared_regs, because the same data structure gets used whether or not HT is on. When HT is off, those events still need coordination, because they use an extra MSR that has to be shared within an event group.

Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20110606145703.GA7258@quad
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Peter Zijlstra

Since only samples call perf_output_sample(), it's much saner (and more correct) to put the sample logic in there than in the perf_output_begin()/perf_output_end() pair. Saves a useless argument, reduces conditionals and shrinks struct perf_output_handle. Win!

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-2crpvsx3cqu67q3zqjbnlpsc@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Peter Zijlstra

The nmi parameter indicated whether we could do wakeups from the current context; if not, we would set some state and self-IPI and let the resulting interrupt do the wakeup. For the various event classes:

 - hardware: nmi=0; the PMI is in fact an NMI, or we run irq_work_run() from the PMI tail (ARM etc.)
 - tracepoint: nmi=0; since a tracepoint could fire from NMI context
 - software: nmi=[0,1]; some, like the schedule thing, cannot perform wakeups and hence need 0

As one can see, there is very little nmi=1 usage, and the downside of not using it is that on some platforms some software events can have a jiffy delay in wakeup (when arch_irq_work_raise() isn't implemented). The upside, however, is that we can remove the nmi parameter and save a bunch of conditionals in fast paths.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Michael Cree <mcree@orcon.net.nz>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: Anton Blanchard <anton@samba.org>
Cc: Eric B Munson <emunson@mgebm.net>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jason Wessel <jason.wessel@windriver.com>
Cc: Don Zickus <dzickus@redhat.com>
Link: http://lkml.kernel.org/n/tip-agjev8eu666tvknpb3iaj0fg@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
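Concretely, the change removes one argument from the overflow path; a before/after of the central signature (callers elsewhere were adjusted to match):

    /* Before: every caller had to declare whether it ran in NMI context. */
    int perf_event_overflow(struct perf_event *event, int nmi,
                            struct perf_sample_data *data,
                            struct pt_regs *regs);

    /* After: the wakeup-context question is handled internally (irq_work). */
    int perf_event_overflow(struct perf_event *event,
                            struct perf_sample_data *data,
                            struct pt_regs *regs);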
-
By Cyrill Gorcunov

Due to restrictions and specifics of the Netburst PMU we need a separate event for the NMI watchdog. In particular, every Netburst event consumes not just a counter and a config register, but also an additional ESCR register. Since ESCR registers are grouped upon counters (i.e. if an ESCR is occupied by some event, there is no room for another event to enter until it is released), we need to pick the "least" used ESCR (or the most available one) for nmi-watchdog purposes -- so MSR_P4_CRU_ESCR2/3 was chosen.

With this patch nmi-watchdog and perf top should be able to run simultaneously.

Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
CC: Lin Ming <ming.m.lin@intel.com>
CC: Arnaldo Carvalho de Melo <acme@redhat.com>
CC: Frederic Weisbecker <fweisbec@gmail.com>
Tested-and-reviewed-by: Don Zickus <dzickus@redhat.com>
Tested-and-reviewed-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20110623124918.GC13050@sun
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Suresh Siddha

Before check_fpu() is called, we have the CR0.TS bit set, and hence the floating point code checking for the FDIV bug was generating a DNA (device-not-available) exception. Use kernel_fpu_begin()/kernel_fpu_end() around the floating point code to avoid this unnecessary exception during boot.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Link: http://lkml.kernel.org/r/1309479572.2665.1372.camel@sbsiddha-MOBL3.sc.intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
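The fix is the standard pattern for touching the FPU from kernel code; a minimal sketch with the FDIV probe itself elided (the function name is illustrative, and the declaring header has moved between kernel versions):

    #include <asm/i387.h>   /* kernel_fpu_begin/end in kernels of this era */

    static void __init check_fpu_fdiv(void)
    {
        /* Clears CR0.TS so the FP instructions below do not raise a
         * device-not-available (#NM) exception. */
        kernel_fpu_begin();

        /* ... run the FDIV-bug probe with real FP instructions ... */

        kernel_fpu_end();
    }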
-
- 28 June 2011, 2 commits
By Suresh Siddha

The MTRR rendezvous sequence was not implemented using stop_machine() before, as it gets called both from process context as well as from the cpu online paths (where the cpu has not yet come online and interrupts are disabled, etc.). Now that we have the new stop_machine_from_inactive_cpu() API, use it for the rendezvous during MTRR init of a logical processor that is coming online. For the rest (runtime MTRR modification, system boot, resume paths), use stop_machine() to implement the rendezvous sequence. This consolidates and cleans up the code.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Link: http://lkml.kernel.org/r/20110623182057.076997177@sbsiddha-MOBL3.sc.intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
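The two entry points end up calling the same handler through different stop-machine flavors; a hedged sketch (the handler and data names are assumptions, and cpu_callout_mask is the x86 mask of CPUs being brought up):

    #include <linux/stop_machine.h>

    struct mtrr_rendezvous_data { /* ... MTRR state to program ... */ };

    static int mtrr_rendezvous_handler(void *info)
    {
        /* runs on every CPU with interrupts disabled */
        return 0;
    }

    /* Runtime modification, boot, resume: all participating CPUs online. */
    static void mtrr_update_all(struct mtrr_rendezvous_data *data)
    {
        stop_machine(mtrr_rendezvous_handler, data, cpu_online_mask);
    }

    /* CPU online path: the calling CPU is not yet in cpu_online_mask. */
    static void mtrr_ap_init_sketch(struct mtrr_rendezvous_data *data)
    {
        stop_machine_from_inactive_cpu(mtrr_rendezvous_handler, data,
                                       cpu_callout_mask);
    }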
-
By Suresh Siddha

An MTRR rendezvous sequence using stop_one_cpu_nowait() can potentially happen in parallel with another system-wide rendezvous using stop_machine(). This can lead to deadlock: the order in which the works are queued can differ between cpus, so some cpus will be running the first rendezvous handler and others the second, each set waiting for the other set to join the system-wide rendezvous.

The MTRR rendezvous sequence is not implemented using stop_machine(), as it gets called both from process context as well as from the cpu online paths (where the cpu has not yet come online and interrupts are disabled, etc.), and stop_machine() works only with online cpus.

For now, take the stop_machine mutex in the MTRR rendezvous sequence that gets called from an online cpu (here we are in process context and can potentially sleep while taking the mutex). The MTRR rendezvous that gets triggered during cpu online doesn't need to take this stop_machine lock, as stop_machine() already ensures that no cpu hotplug is going on in parallel by doing get_online_cpus().

TBD: Pursue a cleaner solution of extending the stop_machine() infrastructure to handle the case where the calling cpu is still not online, and use this for the MTRR rendezvous sequence.

Fixes: https://bugzilla.novell.com/show_bug.cgi?id=672008
Reported-by: Vadim Kotelnikov <vadimuzzz@inbox.ru>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Link: http://lkml.kernel.org/r/20110623182056.807230326@sbsiddha-MOBL3.sc.intel.com
Cc: stable@kernel.org # 2.6.35+, backport a week or two after this gets more testing in mainline
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- 16 June 2011, 12 commits
By Hidetoshi Seto

There are many functions named mce_*, so use a new prefix for the subset of functions related to sysfs support. And since f3c6ea1b introduced syscore_ops, use the prefix mce_syscore for the power-management functions that lived in sysdev_class before.

Before:                   After:
 mce_device               mce_sysdev
 mce_sysclass             mce_sysdev_class
 mce_attrs                mce_sysdev_attrs
 mce_dev_initialized      mce_sysdev_initialized
 mce_create_device        mce_sysdev_create
 mce_remove_device        mce_sysdev_remove
 mce_suspend              mce_syscore_suspend
 mce_shutdown             mce_syscore_shutdown
 mce_resume               mce_syscore_resume

Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Link: http://lkml.kernel.org/r/4DEED81B.8020506@jp.fujitsu.com
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
-
By Hidetoshi Seto

There are many functions named mce_*, so use a new prefix for the subset of functions dealing with the character device /dev/mcelog. This change doesn't impact the mce-inject module, because the exported symbol mce_chrdev_ops already has the prefix; it is therefore left unchanged.

Before:                   After:
 mce_wait                 mce_chrdev_wait
 mce_state_lock           mce_chrdev_state_lock
 open_count               mce_chrdev_open_count
 open_exclu               mce_chrdev_open_exclu
 mce_open                 mce_chrdev_open
 mce_release              mce_chrdev_release
 mce_read_mutex           mce_chrdev_read_mutex
 mce_read                 mce_chrdev_read
 mce_poll                 mce_chrdev_poll
 mce_ioctl                mce_chrdev_ioctl
 mce_log_device           mce_chrdev_device

Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Link: http://lkml.kernel.org/r/4DEED7CD.3040500@jp.fujitsu.com
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
-
By Hidetoshi Seto
Use a temporary local variable m to simplify the code. No change in logic.

Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Link: http://lkml.kernel.org/r/4DEED7A8.8020307@jp.fujitsu.com
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
-
By Hidetoshi Seto
Use temporary local variable sysdev to simplify the code. No change in logic.

Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Link: http://lkml.kernel.org/r/4DEED777.7080205@jp.fujitsu.com
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
-
By Hidetoshi Seto

Because "ancient CPUs" like P5 and WinChip don't have X86_FEATURE_MCA (I suppose so), mcheck_cpu_init() on such CPUs will return at the mce_available() check after __mcheck_cpu_ancient_init(). It is hard to know this implicit behavior without knowing the CPUs well. So make it clear that we leave mcheck_cpu_init() when the CPU has been initialized by __mcheck_cpu_ancient_init().

Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Link: http://lkml.kernel.org/r/4DEED74B.20502@jp.fujitsu.com
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
-
By Hidetoshi Seto

This patch introduces mce_gather_info(), to be called at the beginning of error handling to gather the minimum error information from the proper error registers (and saved registers). As a result of integrating mce_get_rip() into it, unnecessary zeroing is removed. This also takes care of saving RIP, which is required to make a decision about error severity for SRAR errors, instead of retrieving it later in the handler.

Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Link: http://lkml.kernel.org/r/4DEED71A.1060906@jp.fujitsu.com
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
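A hedged sketch of the shape of such a helper, simplified from the mce.c internals (field names follow struct mce; the body is not the exact upstream code):

    /* Gather minimal error info at the start of the MCE handler. */
    static void mce_gather_info(struct mce *m, struct pt_regs *regs)
    {
        mce_setup(m);           /* fills cpu, time, apicid, ... */

        m->mcgstatus = mce_rdmsrl(MSR_IA32_MCG_STATUS);
        if (regs) {
            /* Save RIP now; severity grading for SRAR errors needs
             * to know whether RIPV/EIPV make the saved IP meaningful. */
            if (m->mcgstatus & (MCG_STATUS_RIPV | MCG_STATUS_EIPV)) {
                m->ip = regs->ip;
                m->cs = regs->cs;
            }
        }
    }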
-
By Hidetoshi Seto

Follow the other MCi_* register defines, plus define MCI_MISC_ADDR_LSB() and MCI_MISC_ADDR_MODE().

Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Link: http://lkml.kernel.org/r/4DEED6E8.9090509@jp.fujitsu.com
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
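The two accessors decode the address-reporting fields of the MCi_MISC registers. A sketch of the resulting defines (bit positions follow the SDM's MCi_MISC layout; verify against asm/mce.h):

    /* MCi_MISC: least-significant valid address bit and address mode */
    #define MCI_MISC_ADDR_LSB(m)    ((m) & 0x3f)        /* bits 5:0 */
    #define MCI_MISC_ADDR_MODE(m)   (((m) >> 6) & 7)    /* bits 8:6 */

    /* Address modes reported in MCi_MISC */
    #define MCI_MISC_ADDR_SEGOFF    0   /* segment offset   */
    #define MCI_MISC_ADDR_LINEAR    1   /* linear address   */
    #define MCI_MISC_ADDR_PHYS      2   /* physical address */
    #define MCI_MISC_ADDR_MEM       3   /* memory address   */
    #define MCI_MISC_ADDR_GENERIC   7   /* generic          */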
-
By Hidetoshi Seto

The MCE handler uses a special vector for a self IPI to invoke post-emergency processing in an interrupt context, e.g. to call an NMI-unsafe function, wake up loggers, schedule time-consuming work for recovery, etc. This mechanism is now generalized by the following commit:

> e360adbe
> Author: Peter Zijlstra <a.p.zijlstra@chello.nl>
> Date: Thu Oct 14 14:01:34 2010 +0800
>
> irq_work: Add generic hardirq context callbacks
>
> Provide a mechanism that allows running code in IRQ context. It is
> most useful for NMI code that needs to interact with the rest of the
> system -- like wakeup a task to drain buffers.

So change the MCE handler to use the provided generic mechanism.

Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Link: http://lkml.kernel.org/r/4DEED6B2.6080005@jp.fujitsu.com
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
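The conversion is mechanical; a hedged sketch of the irq_work-based wakeup (the names are illustrative, not the exact mce.c symbols):

    #include <linux/irq_work.h>

    static void mce_irq_work_cb(struct irq_work *entry)
    {
        /* Ordinary IRQ context: safe to wake loggers and schedule
         * time-consuming recovery work. */
    }

    static struct irq_work mce_irq_work;

    static void __init mce_irq_work_init(void)
    {
        init_irq_work(&mce_irq_work, mce_irq_work_cb);
    }

    /* Called from the NMI-context MCE handler instead of a self-IPI. */
    static void mce_report_event_sketch(void)
    {
        irq_work_queue(&mce_irq_work);
    }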
-
By Hidetoshi Seto

More specifically:
 - sort bits in the macros
 - use BITCLR/BITSET
 - coordinate the message pattern
 - use m for struct mce
 - clean up severities_debugfs_init()

No functional change.

Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Link: http://lkml.kernel.org/r/4DEED679.9090503@jp.fujitsu.com
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
-
By Hidetoshi Seto

The current format of an item in this table is:

  condition(param, ..., level, message [, condition2 ...])

So we have to check both an item's head and tail to find the conditions which match the item. Format them in a more straightforward manner:

  item(level, message, condition [, condition2 ...])

Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Link: http://lkml.kernel.org/r/4DEED61F.5010502@jp.fujitsu.com
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
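In the source this became the MCESEV() macro; a sketch of the reformatted style, with entries excerpted in spirit (the exact rule set lives in mce-severity.c):

    /* Severity first, then the human-readable message, then the
     * matching conditions -- all in one place per rule. */
    MCESEV(
        NO, "Invalid",
        BITCLR(MCI_STATUS_VAL)
        ),
    MCESEV(
        PANIC, "Processor context corrupt",
        BITSET(MCI_STATUS_PCC)
        ),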
-
By Hidetoshi Seto

The table looks very complicated and is hard to read for people other than skilled developers, so let's clean it up a bit. As a first step, change the format to make the elements of the table easier to read.

Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Link: http://lkml.kernel.org/r/4DEED5EB.6050400@jp.fujitsu.com
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
-
By Tony Luck

The "Spurious not enabled" entry is redundant: the "Not enabled" entry earlier in the table already covers this case. The "Action required; unknown MCACOD" entry shouldn't specify MCACOD in the .mask field; the current code will only match for mcacod==0 rather than for all AR=1 entries.

Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Link: http://lkml.kernel.org/r/4DEED5BC.8030703@jp.fujitsu.com
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
-
- 29 May 2011, 2 commits
By Len Brown

We'd rather that modern machines not check whether HLT works on every entry into idle, for the benefit of machines that had marginal electricals 15 years ago. If those machines are still running the upstream kernel, they can use "idle=poll". The only difference will be that they'll now invoke HLT in machine_hlt().

cc: x86@kernel.org # .39.x
Signed-off-by: Len Brown <len.brown@intel.com>
-
By Len Brown

The workaround for AMD erratum 400 uses the term "c1e", falsely suggesting that:

 1. Intel C1E is somehow involved
 2. All AMD processors with C1E are involved

Use the string "amd_c1e" instead of simply "c1e" to clarify that this workaround is specific to AMD's version of C1E. Use the string "e400" to clarify that the workaround is specific to AMD processors with erratum 400. This patch is text-substitution only, with no functional change.

cc: x86@kernel.org
Acked-by: Borislav Petkov <borislav.petkov@amd.com>
Signed-off-by: Len Brown <len.brown@intel.com>
-
- 27 May 2011, 1 commit
By Boris Ostrovsky

Commit b87cf80a added support for ARAT (Always Running APIC Timer) on AMD processors that are not affected by erratum 400. This erratum is present in certain processor families and prevents the APIC timer from waking up the CPU when it is in a deep C state, including the C1E state. Determining whether a processor is affected by this erratum can have some corner cases, and handling those cases is somewhat complicated. In the interest of simplicity we won't claim ARAT support on processor families below 0x12, and will go back to broadcasting the timer when going idle.

Signed-off-by: Boris Ostrovsky <ostr@amd64.org>
Link: http://lkml.kernel.org/r/1306423192-19774-1-git-send-email-ostr@amd64.org
Tested-by: Boris Petkov <borislav.petkov@amd.com>
Cc: Hans Rosenfeld <Hans.Rosenfeld@amd.com>
Cc: Andreas Herrmann <Andreas.Herrmann3@amd.com>
Cc: Chuck Ebbert <cebbert@redhat.com>
Cc: stable@kernel.org # 32.x, 38.x, 39.x
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
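The simplification boils down to a single family check in the AMD init path; a hedged sketch (the wrapper name is an assumption):

    #include <asm/processor.h>

    static void __cpuinit init_amd_arat(struct cpuinfo_x86 *c)
    {
        /* Family 0x12 and above have an APIC timer that keeps running
         * in deep C states; older families may be hit by erratum 400,
         * so do not claim ARAT there. */
        if (c->x86 > 0x11)
            set_cpu_cap(c, X86_FEATURE_ARAT);
    }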
-
- 26 May 2011, 1 commit
By Nikhil P Rao

This patch removes a check that causes incorrect scheduler domain setup (SMP instead of SMT) and boot-log warning messages when the cpuid extensions for topology enumeration are not supported and the number of processors reported to the OS is smaller than smp_num_siblings.

Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Nikhil P Rao <nikhil.rao@intel.com>
Link: http://lkml.kernel.org/r/1306343921.19325.1.camel@fedora13
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 23 May 2011, 1 commit
By Linus Torvalds

The setup_smep() function gets called at resume time too, and is thus not a pure __init function. When marked as __init, it gets thrown out after the kernel has initialized, and when the kernel is suspended and resumed, the code will no longer be around and we'll get a nice "kernel tried to execute NX-protected page" oops, because the page is no longer marked executable.

Reported-and-tested-by: Parag Warudkar <parag.lkml@gmail.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: "H. Peter Anvin" <hpa@linux.intel.com>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 21 May 2011, 1 commit
By Fenghua Yu

Fix these kernel compilation warnings:

  WARNING: arch/x86/built-in.o(.cpuinit.text+0x1e07): Section mismatch ...
  WARNING: arch/x86/built-in.o(.cpuinit.text+0x1b10): Section mismatch ...

introduced by commit de5397ad ("x86, cpu: Enable/disable Supervisor Mode Execution Protection"). Change disable_smep from __initdata to __cpuinitdata, and change setup_smep() from __init to __cpuinit.

Reported-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Cc: Asit K Mallick <asit.k.mallick@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1305930797-11409-1-git-send-email-fenghua.yu@intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 20 May 2011, 2 commits
By Joerg Roedel

The workaround for https://bugzilla.kernel.org/show_bug.cgi?id=33012 introduced a read and a write of the MC4 mask MSR. Unfortunately this MSR is not emulated by the KVM hypervisor, so the kernel gets a #GP and crashes when applying the workaround while running inside KVM. The issue was reported as https://bugzilla.kernel.org/show_bug.cgi?id=35132 and is fixed with this patch. The change simply lets the kernel ignore any #GP it gets while accessing this MSR, by using the _safe MSR access methods.

Reported-by: Török Edwin <edwintorok@gmail.com>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Cc: Rafael J. Wysocki <rjw@sisk.pl>
Cc: Maciej Rutecki <maciej.rutecki@gmail.com>
Cc: Avi Kivity <avi@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: <stable@kernel.org> # .39.x
Signed-off-by: Ingo Molnar <mingo@elte.hu>
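The _safe accessors return an error instead of letting the #GP propagate; a hedged sketch of the pattern (MSR_AMD64_MCx_MASK is the kernel's name for this MSR family, the bit set is illustrative, and the write helper was spelled checking_wrmsrl() in kernels of that era):

    #include <asm/msr.h>

    static void __cpuinit amd_mc4_mask_workaround(void)
    {
        u64 mask;

        /* rdmsrl_safe() returns non-zero if the read #GPs (e.g. the MSR
         * is not emulated by the hypervisor) instead of oopsing. */
        if (rdmsrl_safe(MSR_AMD64_MCx_MASK(4), &mask))
            return;

        mask |= (1ULL << 10);   /* illustrative erratum bit */
        wrmsrl_safe(MSR_AMD64_MCx_MASK(4), mask);
    }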
-
By Dave Jones
Signed-off-by: Dave Jones <davej@redhat.com>
-
- 18 May 2011, 1 commit
By Fenghua Yu

Enable/disable the newly documented SMEP (Supervisor Mode Execution Protection) CPU feature in the kernel. CR4.SMEP (bit 20) is 0 at power-on. If the feature is supported by the CPU (X86_FEATURE_SMEP), enable SMEP by setting CR4.SMEP. The new kernel option nosmep disables the feature even if it is supported by the CPU.

[ hpa: moved the call to setup_smep() until after the vendor-specific initialization; that ensures that CPUID features are unmasked. We will still run it before we have userspace (never mind uncontrolled userspace). ]

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
LKML-Reference: <1305157865-31727-1-git-send-email-fenghua.yu@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
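A hedged sketch of the enable path described above, using era-appropriate helpers (set_in_cr4()/clear_in_cr4() were later replaced by cr4_set_bits(); the annotations here predate the __init/__cpuinit fixes listed earlier in this log):

    #include <linux/init.h>
    #include <asm/processor.h>

    static int disable_smep __cpuinitdata;

    static __init int setup_disable_smep(char *arg)
    {
        disable_smep = 1;
        return 1;
    }
    __setup("nosmep", setup_disable_smep);

    static __cpuinit void setup_smep(struct cpuinfo_x86 *c)
    {
        if (cpu_has(c, X86_FEATURE_SMEP)) {
            if (unlikely(disable_smep)) {
                setup_clear_cpu_cap(X86_FEATURE_SMEP);
                clear_in_cr4(X86_CR4_SMEP);
            } else {
                set_in_cr4(X86_CR4_SMEP);
            }
        }
    }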
-