- 08 June 2012, 2 commits
-
-
By Borislav Petkov

Now that all users of {rd,wr}msr_amd_safe have been fixed, deprecate their use by making them private to amd.c and adding warnings when used on anything other than K8.

Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Link: http://lkml.kernel.org/r/1338562358-28182-5-git-send-email-bp@amd64.org
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
By Andre Przywara

There were paravirt_ops hooks for the full-register-set variant of {rd,wr}msr_safe which are no longer used by anyone. Remove them to make the code cleaner and to avoid silent breakage when the pvops members are uninitialized. This has been boot-tested natively and under Xen, with PVOPS enabled and disabled, on one machine.

Signed-off-by: Andre Przywara <andre.przywara@amd.com>
Link: http://lkml.kernel.org/r/1338562358-28182-2-git-send-email-bp@amd64.org
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 20 April 2012, 1 commit
-
-
By H. Peter Anvin

This reverts commit ce37defc "x86: Document rdmsr_safe restrictions", as these restrictions no longer apply.

Reported-by: Borislav Petkov <borislav.petkov@amd.com>
Link: http://lkml.kernel.org/r/20120419171609.GH3221@aftab.osrc.amd.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- 05 December 2011, 1 commit
-
-
By Borislav Petkov

Recently, I got bitten by using rdmsr_safe too early in the boot process. Document its shortcomings for future reference.

Link: http://lkml.kernel.org/r/4ED5B70F.606@lwfinger.net
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
-
- 21 July 2010, 1 commit
-
-
By Andi Kleen

Avoids quite a lot of warnings in a gcc 4.6 -Wall build, because this happens in a commonly used header file (apic.h).

Signed-off-by: Andi Kleen <ak@linux.intel.com>
LKML-Reference: <201007202219.o6KMJme6021066@imap1.linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- 17 December 2009, 1 commit
-
-
By Borislav Petkov

Randy Dunlap reported the following build error:

"When CONFIG_SMP=n, CONFIG_X86_MSR=m:
ERROR: "msrs_free" [drivers/edac/amd64_edac_mod.ko] undefined!
ERROR: "msrs_alloc" [drivers/edac/amd64_edac_mod.ko] undefined!"

This is due to the fact that <arch/x86/lib/msr.c> is conditional on CONFIG_SMP, and in the UP case we have only the stubs in the header. Fork off the SMP functionality into a new file (msr-smp.c) and build msrs_{alloc,free} unconditionally.

Reported-by: Randy Dunlap <randy.dunlap@oracle.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Borislav Petkov <petkovbb@gmail.com>
LKML-Reference: <20091216231625.GD27228@liondog.tnic>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 16 December 2009, 1 commit
-
-
By Sheng Yang

Clean up write_tsc() and write_tscp_aux() by replacing hardcoded values. No change in functionality.

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Cc: Avi Kivity <avi@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
LKML-Reference: <1260942485-19156-4-git-send-email-sheng@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
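For reference, the cleanup described here boils the TSC write helpers down to thin wrmsr() wrappers over named constants instead of magic numbers. The exact macro spelling (the tree uses write_rdtscp_aux rather than write_tscp_aux) is an assumption here; MSR_IA32_TSC (0x10) and MSR_TSC_AUX (0xc0000103) are the standard constants being substituted. A minimal sketch:

/* Sketch only: the calls previously looked like wrmsr(0x10, val1, val2). */
#include <asm/msr-index.h>	/* MSR_IA32_TSC, MSR_TSC_AUX */

#define write_tsc(val1, val2)	wrmsr(MSR_IA32_TSC, (val1), (val2))
#define write_rdtscp_aux(val)	wrmsr(MSR_TSC_AUX, (val), 0)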
-
- 12 December 2009, 1 commit
-
-
By Borislav Petkov

The current rd/wrmsr_on_cpus helpers assume that the supplied cpumasks are contiguous. However, there are machines out there, like some K8 multinode Opterons, which have a non-contiguous core enumeration on each node (e.g. cores 0,2 on node 0 instead of 0,1); see http://www.gossamer-threads.com/lists/linux/kernel/1160268. This patch fixes the resulting out-of-bounds writes (see URL above) by adding per-CPU msr structs which are used on the respective cores. Additionally, two helpers, msrs_{alloc,free}, are provided for use by the callers of the MSR accessors.

Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Mauro Carvalho Chehab <mchehab@redhat.com>
Cc: Aristeu Rozanski <aris@redhat.com>
Cc: Randy Dunlap <randy.dunlap@oracle.com>
Cc: Doug Thompson <dougthompson@xmission.com>
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
LKML-Reference: <20091211171440.GD31998@aftab>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
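A hedged caller-side sketch of the two new helpers: msrs_alloc(), rdmsr_on_cpus() and msrs_free() are the interfaces named above, while the MSR number, cpumask and printout are purely illustrative.

#include <linux/cpumask.h>
#include <linux/percpu.h>
#include <linux/printk.h>
#include <linux/errno.h>
#include <asm/msr.h>

/* Read one MSR on every CPU in 'mask'; each CPU fills its own per-CPU
 * slot, so a non-contiguous core enumeration cannot overflow an array. */
static int dump_msr_on_mask(const struct cpumask *mask, u32 msr_no)
{
	struct msr *msrs;
	int cpu;

	msrs = msrs_alloc();		/* one struct msr per possible CPU */
	if (!msrs)
		return -ENOMEM;

	rdmsr_on_cpus(mask, msr_no, msrs);

	for_each_cpu(cpu, mask)
		pr_info("CPU%d: MSR %#x = %#llx\n", cpu, msr_no,
			per_cpu_ptr(msrs, cpu)->q);

	msrs_free(msrs);
	return 0;
}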
-
- 08 November 2009, 1 commit
-
-
By Rusty Russell

This makes the declarations match the definitions, which already use 'struct cpumask'.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: Borislav Petkov <borislav.petkov@amd.com>
LKML-Reference: <200911052245.41803.rusty@rustcorp.com.au>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 15 September 2009, 1 commit
-
-
By Borislav Petkov

Since rdmsr_on_cpus and wrmsr_on_cpus are almost identical, unify them into a common __rwmsr_on_cpus helper, thus avoiding code duplication. While at it, convert cpumask_t's to const struct cpumask *.

Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
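A minimal sketch of what such a unification typically looks like: one body parameterized by the per-CPU callback, so the read and write variants differ only in the function they pass in. The struct msr_info layout shown here is an assumption made to keep the fragment self-contained, not necessarily the exact one in msr.c.

#include <linux/smp.h>
#include <linux/cpumask.h>
#include <linux/string.h>
#include <asm/msr.h>

struct msr_info {			/* assumed shape of the call argument */
	u32		msr_no;
	struct msr	reg;
	struct msr	*msrs;
	int		err;
};

/* Shared by rdmsr_on_cpus() and wrmsr_on_cpus(): only msr_func differs.
 * smp_call_function_many() skips the calling CPU, hence the explicit
 * local call when this CPU is in the mask. */
static void __rwmsr_on_cpus(const struct cpumask *mask, u32 msr_no,
			    struct msr *msrs, void (*msr_func)(void *info))
{
	struct msr_info rv;
	int this_cpu;

	memset(&rv, 0, sizeof(rv));
	rv.msrs   = msrs;
	rv.msr_no = msr_no;

	this_cpu = get_cpu();
	if (cpumask_test_cpu(this_cpu, mask))
		msr_func(&rv);
	smp_call_function_many(mask, msr_func, &rv, 1);
	put_cpu();
}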
-
- 01 September 2009, 5 commits
-
-
By H. Peter Anvin

Make it possible to access the all-register-setting/getting MSR functions via the MSR driver. This is implemented as an ioctl() on the standard MSR device node.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Borislav Petkov <petkovbb@gmail.com>
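A hedged user-space sketch of driving that ioctl. It assumes the exported header defines X86_IOC_RDMSR_REGS with a __u32[8] argument in eax, ecx, edx, ebx, esp, ebp, esi, edi order, and the MSR index chosen here is purely illustrative.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/types.h>
#include <asm/msr.h>		/* X86_IOC_RDMSR_REGS (assumed to be exported) */

int main(void)
{
	__u32 regs[8] = { 0 };	/* eax, ecx, edx, ebx, esp, ebp, esi, edi */
	int fd = open("/dev/cpu/0/msr", O_RDONLY);

	if (fd < 0)
		return 1;

	regs[1] = 0x10;		/* ecx = MSR index (illustrative: TSC) */
	if (ioctl(fd, X86_IOC_RDMSR_REGS, regs) == 0)
		printf("eax=%#x edx=%#x\n", regs[0], regs[2]);

	close(fd);
	return 0;
}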
-
By H. Peter Anvin

Create _on_cpu helpers for {rd,wr}msr_safe_regs() analogously to the other MSR functions. This will be necessary to add support for these to the MSR driver.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Borislav Petkov <petkovbb@gmail.com>
-
By H. Peter Anvin

For some reason, the _safe MSR functions returned -EFAULT, not -EIO. However, the only user which cares about the return code as anything other than a boolean is the MSR driver, which wants -EIO. Change it to -EIO across the board.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Chris Wright <chrisw@sous-sol.org>
Cc: Alok Kataria <akataria@vmware.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
-
By Borislav Petkov

Switch them to native_{rd,wr}msr_safe_regs and remove pv_cpu_ops.read_msr_amd.

Signed-off-by: Borislav Petkov <petkovbb@gmail.com>
LKML-Reference: <1251705011-18636-2-git-send-email-petkovbb@gmail.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
By Borislav Petkov

native_{rdmsr,wrmsr}_safe_regs are two new interfaces which allow presetting of a subset of eight x86 GPRs before executing the rd/wrmsr instructions. This is needed at least on AMD K8 for accessing an erratum workaround MSR. Originally based on an idea by H. Peter Anvin.

Signed-off-by: Borislav Petkov <petkovbb@gmail.com>
LKML-Reference: <1251705011-18636-1-git-send-email-petkovbb@gmail.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
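A sketch modeled on the K8 erratum accessor that motivated the interface: ecx carries the MSR index, edi a magic unlock key, and the result comes back in eax/edx. The GPR-array layout (eax, ecx, edx, ebx, esp, ebp, esi, edi) and the 0x9c5a203a key are stated here as assumptions about that accessor, not guaranteed details.

#include <asm/msr.h>		/* rdmsr_safe_regs() */

static int rdmsrl_amd_safe_sketch(unsigned int msr, unsigned long long *p)
{
	u32 gprs[8] = { 0 };
	int err;

	gprs[1] = msr;			/* ecx: MSR index */
	gprs[7] = 0x9c5a203a;		/* edi: K8 unlock key */

	err = rdmsr_safe_regs(gprs);	/* faults are caught, not fatal */

	*p = gprs[0] | ((u64)gprs[2] << 32);	/* eax | edx << 32 */

	return err;
}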
-
- 18 June 2009, 1 commit
-
-
By Jaswinder Singh Rajput

<linux/types.h> is only required for __KERNEL__, as the whole file is covered by it. Also fixed some spacing issues in usr/include/asm-x86/msr.h.

Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
Cc: "H. Peter Anvin" <hpa@kernel.org>
LKML-Reference: <1245228070.2662.1.camel@ht.satnam>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 10 June 2009, 2 commits
-
-
By Borislav Petkov

Provide for concurrent MSR writes on all the CPUs in the cpumask. Also, add a temporary workaround for smp_call_function_many, which skips the CPU we're executing on.

Bart: zero out the rv struct which is allocated on the stack.

Cc: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
-
By Borislav Petkov

Add a struct representing a 64-bit MSR pair consisting of a low and high register part, and convert msr_info to use it. Also, rename msr-on-cpu.c to msr.c.

Side note: put the cpumask.h include in __KERNEL__ space, thus fixing an allmodconfig build failure in the headers_check target.

Cc: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
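A sketch of the 64-bit pair being described: the low/high halves and the combined quadword overlaid in an anonymous union, so callers can use whichever view fits. The field names l/h/q are taken here as an assumption matching the .q access used in the per-CPU helper example earlier in this log.

#include <linux/types.h>

/* One MSR value: address it as two 32-bit halves or one 64-bit word. */
struct msr {
	union {
		struct {
			u32 l;	/* low 32 bits (eax) */
			u32 h;	/* high 32 bits (edx) */
		};
		u64 q;		/* the full 64-bit value */
	};
};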
-
- 25 December 2008, 1 commit
-
-
By Frederic Weisbecker

Impact: fix a crash/hard reboot on certain configs while enabling a CPU at runtime

On some archs, the boot of a secondary cpu can have an early fragile state. On x86-64, the pda is not initialized in the first stage of a cpu boot, but it is needed to get the cpu number and the current task pointer. This data is needed during tracing. As they were dereferenced at this stage, we got a crash while tracing a cpu being enabled at runtime. Some other archs like ia64 can have this kind of issue too.

Changes in v2: we dropped the previous solution of a per-arch function to guess the current state of a cpu, since that could slow down the tracing. This patch removes the -pg flag on arch/x86/kernel/cpu/common.c, where the low-level cpu boot functions live, on start_secondary() and on a helper function used at this stage.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Acked-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 16 December 2008, 1 commit
-
-
By Ken Chen

Impact: micro-optimization

Is there any reason why the x86 rdtscll has to use the out-of-line function instead of the inline __native_read_tsc()? native_read_tsc and __native_read_tsc are essentially the same function. Patch to let x86 rdtscll() use the inline version of read_tsc.

Signed-off-by: Ken Chen <kenchen@google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
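The inline variant in question is just the rdtsc instruction with its edx:eax result stitched into one 64-bit value; a self-contained sketch, not the header's exact macro plumbing:

#include <linux/compiler.h>

/* Sketch: what an inlined TSC read boils down to. */
static __always_inline unsigned long long read_tsc_inline(void)
{
	unsigned int low, high;

	asm volatile("rdtsc" : "=a" (low), "=d" (high));
	return low | ((unsigned long long)high << 32);
}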
-
- 08 November 2008, 1 commit
-
-
By Ingo Molnar

In scheduler-intense workloads, native_read_tsc() overhead accounts for 20% of the system overhead:

 659567  system_call      41222.9375
 686796  schedule           435.7843
 718382  __switch_to        665.1685
 823875  switch_mm         4526.7857
1883122  native_read_tsc  55385.9412
9761990  total                2.8468

This is in large part due to the rdtsc_barrier() that is done before and after reading the TSC. But sched_clock() is not a precise clock in the GTOD sense, so using such barriers is completely pointless. So remove the barriers and only use them in vget_cycles(). This improves lat_ctx performance by about 5%.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 28 October 2008, 1 commit
-
-
By Jike Song

The rdmsr instruction (et al.) is semantically the same for i386 and x86-64. The only difference is how gcc interprets the constraint "A" for these targets.

Signed-off-by: Jike Song <albcamus@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
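The point about constraint "A" deserves a concrete illustration: on i386, "A" ties a 64-bit operand to the edx:eax pair, which is exactly what rdmsr produces; on x86-64, gcc treats "A" as "either rax or rdx", so the same asm silently misbehaves. Naming the two halves explicitly works on both. A hedged sketch of the two spellings:

/* i386-only spelling: "=A" binds the 64-bit result to edx:eax. */
static inline unsigned long long rdmsr_constraint_A(unsigned int msr)
{
	unsigned long long val;

	asm volatile("rdmsr" : "=A" (val) : "c" (msr));
	return val;
}

/* Portable spelling: name eax and edx separately, then combine. */
static inline unsigned long long rdmsr_split(unsigned int msr)
{
	unsigned int low, high;

	asm volatile("rdmsr" : "=a" (low), "=d" (high) : "c" (msr));
	return low | ((unsigned long long)high << 32);
}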
-
- 23 October 2008, 2 commits
-
-
By H. Peter Anvin

Change header guards named "ASM_X86__*" to "_ASM_X86_*" since: a. the double underscore is ugly and pointless; b. no leading underscore violates namespace constraints.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
By Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 26 August 2008, 2 commits
-
-
By H. Peter Anvin

Impact: bogus error codes (+ other?) on x86-64

The rdmsr_safe/wrmsr_safe routines have macros for the handling of the edx:eax arguments. Those macros take a variable number of assembly arguments. This is rather inherently incompatible with using %digit-style escapes in the inline assembly; replace those with %[name]-style escapes. This fixes miscompilation on x86-64, which at the very least caused bogus return values. It is possible that this could also corrupt the return value; I am not sure.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
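A hedged sketch of the %[name] style in this context, modeled on the safe-read pattern of that era; the fixup/exception-table boilerplate and the _ASM_EXTABLE macro from <asm/asm.h> are assumptions about the surrounding header, not a verbatim copy. The point: named operands keep their meaning no matter how many extra arguments the edx:eax macros append, whereas %0/%1 get renumbered.

#include <asm/asm.h>		/* _ASM_EXTABLE */
#include <linux/errno.h>

/* Safe MSR read: returns 0, or -EIO if the rdmsr faults. */
static inline int rdmsr_safe_sketch(unsigned int msr, u32 *lo, u32 *hi)
{
	int err;

	asm volatile("2: rdmsr ; xor %[err],%[err]\n"
		     "1:\n\t"
		     ".section .fixup,\"ax\"\n\t"
		     "3: mov %[fault],%[err] ; jmp 1b\n\t"
		     ".previous\n\t"
		     _ASM_EXTABLE(2b, 3b)
		     : [err] "=r" (err), "=a" (*lo), "=d" (*hi)
		     : "c" (msr), [fault] "i" (-EIO));
	return err;
}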
-
By H. Peter Anvin

Propagate the error (-ENXIO) from smp_call_function_single(). These errors can happen when a CPU is unplugged while the MSR driver is open.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 22 August 2008, 1 commit
-
-
By Yinghai Lu

Command line: show_msr=1 for the BSP, show_msr=32 for all 32 CPUs.

[ mingo@elte.hu: added documentation ]

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 23 July 2008, 1 commit
-
-
By Vegard Nossum

This patch is the result of an automatic script that consolidates the format of all the header guards in include/asm-x86/. The format:

1. No leading underscore. Names with leading underscores are reserved.
2. Pathname components are separated by two underscores. So we can distinguish between mm_types.h and mm/types.h.
3. Everything except letters and numbers is turned into single underscores.

Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
-
- 08 July 2008, 1 commit
-
-
By Jeremy Fitzhardinge

wrmsr is a special instruction which can have arbitrary system-wide effects. We don't want the compiler to reorder it with respect to memory operations, so make it a memory barrier.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: xen-devel <xen-devel@lists.xensource.com>
Cc: Stephen Tweedie <sct@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
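Concretely, the barrier property comes from a "memory" clobber on the asm statement; a minimal sketch of the write path with that clobber (essentially what a non-paravirt helper looks like, written here as an illustrative function rather than the header's own):

/* The "memory" clobber stops the compiler from moving loads/stores
 * across the MSR write, making wrmsr act as a compiler barrier. */
static inline void native_write_msr_sketch(unsigned int msr,
					   unsigned int low, unsigned int high)
{
	asm volatile("wrmsr"
		     : /* no outputs */
		     : "c" (msr), "a" (low), "d" (high)
		     : "memory");
}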
-
- 26 June 2008, 1 commit
-
-
By Max Asbock

native_read_tscp shifts the bits of the high-order value in the wrong direction; the attached patch fixes that.

Signed-off-by: Max Asbock <masbock@linux.vnet.ibm.com>
Acked-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
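For clarity, the bug and fix in sketch form: the high half must be shifted up into bits 63:32, not down. The .byte encoding for rdtscp matches how kernels of that era emitted the instruction before assembler support and is an assumption here.

static inline unsigned long long read_tscp_sketch(unsigned int *aux)
{
	unsigned long low, high;

	/* rdtscp: TSC in edx:eax, IA32_TSC_AUX in ecx */
	asm volatile(".byte 0x0f,0x01,0xf9"
		     : "=a" (low), "=d" (high), "=c" (*aux));

	/* The bug was 'high >> 32'; the high word belongs in the upper half. */
	return low | ((unsigned long long)high << 32);
}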
-
- 17 April 2008, 2 commits
-
-
By Andi Kleen

RDMSR for 64-bit values with exception handling. Makes it easier to deal with 64-bit-valued MSRs. The old 64-bit code base had this too, as checking_rdmsrl(), but it got dropped somehow.

Signed-off-by: Andi Kleen <andi@firstfloor.org>
Cc: andreas.herrmann3@amd.com
Cc: mingo@elte.hu
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
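Typical call-site shape for the helper being added, assuming rdmsrl_safe() returns non-zero when the read faults; the MSR chosen and the surrounding function are illustrative only.

#include <linux/printk.h>
#include <asm/msr.h>

static void probe_platform_id(void)
{
	u64 val;

	/* MSR_IA32_PLATFORM_ID is just an example target. */
	if (rdmsrl_safe(MSR_IA32_PLATFORM_ID, &val))
		pr_warn("MSR not readable on this CPU\n");
	else
		pr_info("platform id: %#llx\n", val);
}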
-
By Joe Perches

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 04 February 2008, 1 commit
-
-
By H. Peter Anvin

Use the _ASM_EXTABLE macro from <asm/asm.h> instead of open-coding __ex_table entries in include/asm-x86/msr.h.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 30 January 2008, 7 commits
-
-
By Ingo Molnar

[ andi@firstfloor.org: build fix ]

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
By Ingo Molnar

Move native_read_tsc() out of line.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
By Glauber de Oliveira Costa

write_tsc() does not need to be enclosed in any paravirt closure, as it uses wrmsr(). So we rip out the duplicate in msr.h and the definition from paravirt.h.

Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
By Andrew Morton

arch/x86/vdso/vgetcpu.c: In function '__vdso_getcpu':
arch/x86/vdso/vgetcpu.c:22: warning: pointer targets in passing argument 1 of 'native_read_tscp' differ in signedness

Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
By Glauber de Oliveira Costa

This patch proceeds with the integration of msr.h, making the code unified instead of having a version for each architecture. We stick with the native_* functions, and then paravirt comes for free.

Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
By Glauber de Oliveira Costa

This patch uses the _ASM_ALIGN and _ASM_PTR macros to make the fixups in native_read/write_msr_safe look the same for x86_64 and i386. Besides using these macros, we also have to take the explicit instruction suffixes out. That is okay because all of these instructions use registers and can be sized by them.

Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
By Glauber de Oliveira Costa

This patch changes the native_write_msr() and friends interface to explicitly take two 32-bit registers instead of a 64-bit value. The change will ease the merge with 64-bit code. As the 64-bit value will be passed as two registers anyway on i386, the PVOP_CALL interface has to account for that and use low/high parameters. It would otherwise force the x86_64 version to be different. The change does not make the i386 generated code less efficient; as said above, it would get the values from two registers anyway.

Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
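In practice the 64-bit convenience wrapper then does the split at the call site; a sketch of the resulting shape, assuming native_write_msr() has the two-u32 signature this commit introduces (the wrapper name here is illustrative):

#include <asm/msr.h>

/* 64-bit callers split the value into low/high halves themselves. */
static inline void wrmsrl_sketch(unsigned int msr, unsigned long long val)
{
	native_write_msr(msr, (unsigned int)val,
			 (unsigned int)(val >> 32));
}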
-