- 17 May 2010 (1 commit)
-
Committed by Anton Blanchard

In preparation for removing volatile from the atomic_t definition, this patch adds a volatile cast to all the atomic read functions.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
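For reference, the pattern this applies looks like the following sketch (the per-architecture definitions vary in the details):

    /* Before: correctness relied on atomic_t's counter member being volatile. */
    #define atomic_read_old(v)  ((v)->counter)

    /* After: the volatile qualifier moves into a cast at the read site, so the
     * compiler still performs a fresh load on every call even once volatile is
     * dropped from the atomic_t definition itself. */
    #define atomic_read(v)      (*(volatile int *)&(v)->counter)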
-
- 15 May 2010 (1 commit)
-
Committed by Borislav Petkov

K8_NB depends on PCI, and when the latter is disabled (allnoconfig) we fail at the final linking stage due to the missing exported num_k8_northbridges. Add a header stub for that.

Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
LKML-Reference: <20100503183036.GJ26107@aftab>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: <stable@kernel.org>
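The usual shape of such a header stub, sketched here with assumed guard names (the real header is asm/k8.h):

    #ifdef CONFIG_K8_NB
    extern int num_k8_northbridges;
    #else
    /* stub: without PCI/K8_NB there are no northbridges to enumerate,
     * so callers compile without needing the missing export */
    #define num_k8_northbridges 0
    #endif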
-
- 14 May 2010 (1 commit)
-
Committed by Roland McGrath

Newer assemblers support the .cfi_sections directive, so we can put the CFI from .S files into the .debug_frame section that is preserved in unstripped vmlinux and in separate debuginfo, rather than the .eh_frame section that is now discarded by vmlinux.lds.S.

Signed-off-by: Roland McGrath <roland@redhat.com>
LKML-Reference: <20100514044303.A6FE7400BE@magilla.sf.frob.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
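A sketch of how the directive is typically wired into a shared header included by .S files (the guard name is an assumption):

    /* arch/x86/include/asm/dwarf2.h, sketch */
    #if defined(CONFIG_AS_CFI_SECTIONS) && defined(__ASSEMBLY__)
        /* Emit CFI into .debug_frame (kept in vmlinux and debuginfo)
         * instead of .eh_frame, which vmlinux.lds.S now discards. */
        .cfi_sections .debug_frame
    #endif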
-
- 12 May 2010 (2 commits)
-
Committed by H. Peter Anvin

use_xsave() is now just a special case of static_cpu_has(), so use static_cpu_has().

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Avi Kivity <avi@redhat.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <1273135546-29690-2-git-send-email-avi@redhat.com>
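The resulting helper is essentially a one-liner; a sketch:

    static __always_inline __pure bool use_xsave(void)
    {
        return static_cpu_has(X86_FEATURE_XSAVE);
    }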
-
Committed by H. Peter Anvin

For CPU-feature-specific code that touches performance-critical paths, introduce a static patching version of [boot_]cpu_has(). This is run at alternatives time and is therefore not appropriate for most initialization code, but on the other hand initialization code is generally not performance-critical. On gcc 4.5+ this uses the new "asm goto" feature.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Avi Kivity <avi@redhat.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <1273135546-29690-2-git-send-email-avi@redhat.com>
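A compressed sketch of the gcc 4.5+ path; the .altinstructions record that tells the patcher to NOP the jmp when the feature is present is elided here, so this is illustrative rather than the literal implementation:

    static __always_inline bool static_cpu_has(u8 bit)
    {
        /* Default code assumes the feature is absent; at alternatives
         * time the jmp below is patched out on CPUs that have it. */
        asm goto("1: jmp %l[t_no]\n"
                 /* ... .altinstructions entry for 'bit' goes here ... */
                 : : "i" (bit) : : t_no);
        return true;
    t_no:
        return false;
    }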
-
- 11 May 2010 (3 commits)
-
Committed by H. Peter Anvin

The proper constraint for receiving an 8-bit variable is "=qm", not "=g", which equals "=rim"; even though the "i" will never match, bugs can and do happen due to the difference between "q" and "r".

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Avi Kivity <avi@redhat.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <1273135546-29690-2-git-send-email-avi@redhat.com>
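A small illustration of why the distinction matters (a hypothetical helper): on 32-bit x86 only %eax-%edx have byte subregisters, which is exactly what "q" expresses and "g" does not:

    static inline unsigned char is_nonzero(unsigned int x)
    {
        unsigned char flag;

        /* "=qm": the 8-bit output must land in memory or in a register
         * with a byte subregister (a/b/c/d). "=g" (i.e. "=rim") would
         * also allow e.g. %esi, for which "setnz %0" cannot be encoded
         * on 32-bit x86. */
        asm("testl %1, %1\n\tsetnz %0" : "=qm" (flag) : "r" (x) : "cc");
        return flag;
    }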
-
Committed by Avi Kivity

Currently all fpu state access is through tsk->thread.xstate. Since we wish to generalize fpu access to non-task contexts, wrap the state in a new 'struct fpu' and convert existing access to use an fpu API. Signal frame handlers are not converted to the API since they will remain task-context-only things.

Signed-off-by: Avi Kivity <avi@redhat.com>
Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <1273135546-29690-3-git-send-email-avi@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
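A sketch of the wrapper (the member follows the description above; the accessor shown is an assumption about the API style, not the literal code):

    struct fpu {
        union thread_xstate *state;    /* what used to be tsk->thread.xstate */
    };

    /* sketch of the API style: callers operate on a struct fpu, not a task */
    static inline void fpu_free(struct fpu *fpu)
    {
        if (fpu->state) {
            kmem_cache_free(task_xstate_cachep, fpu->state);
            fpu->state = NULL;
        }
    }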
-
Committed by Avi Kivity

The fpu code currently uses current->thread_info->status & TS_XSAVE as a way to distinguish between XSAVE-capable processors and older processors. The decision is not really task specific; instead we use the task status to avoid a global memory reference - the value should be the same across all threads. Eliminate this tie-in into the task structure by using an alternative instruction keyed off the XSAVE cpu feature; this results in shorter and faster code, without introducing a global memory reference.

[ hpa: in the future, this probably should use an asm jmp ]

Signed-off-by: Avi Kivity <avi@redhat.com>
Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <1273135546-29690-2-git-send-email-avi@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
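A sketch of the alternatives-based test this introduces (close to, but not necessarily identical to, the actual helper; the static_cpu_has() commits above later subsume it):

    static inline int use_xsave(void)
    {
        u8 has_xsave;

        /* Patched once at boot: no task or global memory reference
         * remains on the hot path afterwards. */
        alternative_io("mov $0, %0",    /* default: XSAVE absent */
                       "mov $1, %0",    /* patched in when X86_FEATURE_XSAVE */
                       X86_FEATURE_XSAVE,
                       "=qm" (has_xsave));
        return has_xsave;
    }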
-
- 08 May 2010 (1 commit)
-
Committed by H. Peter Anvin

Clean up the hypervisor layer and the hypervisor drivers, using an ops structure instead of an enumeration with if statements. The identity of the hypervisor, if needed, can be tested by testing the pointer value in x86_hyper. The MS-HyperV private state is moved into a normal global variable (it's per-system state, not per-CPU state). Being a normal bss variable, it will be left all-zero on non-HyperV platforms, and so can generally be tested for HyperV-specific features without additional qualification.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Acked-by: Greg KH <greg@kroah.com>
Cc: Hank Janssen <hjanssen@microsoft.com>
Cc: Alok Kataria <akataria@vmware.com>
Cc: Ky Srinivasan <ksrinivasan@novell.com>
LKML-Reference: <4BE49778.6060800@zytor.com>
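A sketch of the ops structure and identity test described above (field list abridged):

    struct hypervisor_x86 {
        const char  *name;                  /* platform name string */
        bool        (*detect)(void);        /* detection callback */
        void        (*init_platform)(void); /* platform setup hook */
    };

    extern const struct hypervisor_x86 *x86_hyper;  /* NULL on bare metal */
    extern const struct hypervisor_x86 x86_hyper_vmware;
    extern const struct hypervisor_x86 x86_hyper_ms_hyperv;

Identity, when it matters, then becomes a pointer comparison such as x86_hyper == &x86_hyper_ms_hyperv, replacing the old enumeration.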
-
- 07 May 2010 (1 commit)
-
Committed by Ky Srinivasan

This patch integrates HyperV detection within the framework currently used by VMware. With this patch, we can avoid having to replicate the HyperV detection code in each of the Microsoft HyperV drivers. Reworked and tweaked by Greg K-H to build properly.

Signed-off-by: K. Y. Srinivasan <ksrinivasan@novell.com>
LKML-Reference: <20100506190841.GA1605@kroah.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Vadim Rozenfeld <vrozenfe@redhat.com>
Cc: Avi Kivity <avi@redhat.com>
Cc: Gleb Natapov <gleb@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: "K.Prasad" <prasad@linux.vnet.ibm.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Alan Cox <alan@linux.intel.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Hank Janssen <hjanssen@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
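The detection callback itself reduces to a CPUID signature check; a sketch (constant name assumed):

    #define HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS  0x40000000

    static bool __init ms_hyperv_platform(void)
    {
        u32 eax, hyp_signature[3];

        if (!boot_cpu_has(X86_FEATURE_HYPERVISOR))
            return false;

        cpuid(HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS, &eax,
              &hyp_signature[0], &hyp_signature[1], &hyp_signature[2]);

        /* leaf 0x40000000 returns "Microsoft Hv" across ebx/ecx/edx */
        return !memcmp("Microsoft Hv", hyp_signature, 12);
    }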
-
- 06 May 2010 (1 commit)
-
Committed by Eric W. Biederman

My recent changes introducing a global gsi_end variable failed to take into account the case of using acpi on a system not built to support IO_APICs, causing the build to fail. Define gsi_end to 15 when CONFIG_X86_IO_APIC is not set, to avoid compile errors.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Cc: Yinghai Lu <yinghai@kernel.org>
LKML-Reference: <m1tyqm14la.fsf_-_@fess.ebiederm.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
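The stub in sketch form:

    #ifdef CONFIG_X86_IO_APIC
    extern u32 gsi_end;    /* maintained as ioapics are registered */
    #else
    #define gsi_end 15     /* no IO-APICs: only the 16 legacy GSIs 0..15 */
    #endif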
-
- 05 May 2010 (5 commits)
-
Committed by Eric W. Biederman

Now that the generic irq layer is performing the exact same remapping as io_apic_renumber_irq, we can kill this weird es7000-specific function.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
LKML-Reference: <1269936436-7039-15-git-send-email-ebiederm@xmission.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by Eric W. Biederman

Use the global gsi_end value, now that all ioapics have valid gsi numbers, instead of a combination of acpi_probe_gsi and walking all of the ioapics and counting their number of entries by hand if acpi_probe_gsi gave us an answer we did not like.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
LKML-Reference: <1269936436-7039-13-git-send-email-ebiederm@xmission.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by Eric W. Biederman

Add the global variable gsi_end and teach mp_register_ioapic to keep it up to date as we add more ioapics into the system. ioapics can only be added early in boot, so the code that runs later can treat gsi_end as a constant. Remove the hacks in sfi.c that second-guessed mp_register_ioapic by keeping their own running total of how many gsis had been seen, and use gsi_end instead.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
LKML-Reference: <1269936436-7039-9-git-send-email-ebiederm@xmission.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by Eric W. Biederman

This patch fixes the types of the gsi_base and gsi_end values in struct mp_ioapic_gsi, and the gsi parameter of mp_find_ioapic and mp_find_ioapic_pin. A gsi is canonically a u32, not an int.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
LKML-Reference: <1269936436-7039-8-git-send-email-ebiederm@xmission.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by Eric W. Biederman

Multiple declarations of the same function in different headers are a pain to maintain.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
LKML-Reference: <1269936436-7039-6-git-send-email-ebiederm@xmission.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 04 May 2010 (2 commits)
-
Committed by Borislav Petkov

K8_NB depends on PCI, and when the latter is disabled (allnoconfig) we fail at the final linking stage due to the missing exported num_k8_northbridges. Add a header stub for that.

Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
LKML-Reference: <20100503183036.GJ26107@aftab>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by Brian Gerst

The only difference between FPU and SIMD exceptions is where the status bits are read from (cwd/swd vs. mxcsr). This also fixes the discrepancy introduced by commit adf77bac, which fixed FPU but not SIMD.

Signed-off-by: Brian Gerst <brgerst@gmail.com>
LKML-Reference: <1269176446-2489-3-git-send-email-brgerst@gmail.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
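A sketch of the unified status read (the get_fpu_* helper names existed in i387.h of this era; the exact merged function differs):

    static unsigned short unmasked_fpu_status(struct task_struct *task, int trapnr)
    {
        if (trapnr == 19) {  /* #XF: SIMD - status and masks share MXCSR */
            unsigned short mxcsr = get_fpu_mxcsr(task);
            return ~(mxcsr >> 7) & mxcsr;  /* mask bits sit 7 above status */
        }
        /* #MF: x87 - status in SWD, exception masks in CWD */
        return get_fpu_swd(task) & ~get_fpu_cwd(task);
    }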
-
- 01 May 2010 (2 commits)
-
Committed by Frederic Weisbecker

The breakpoint generic layer assumes that archs always know in advance the static number of address registers available to host breakpoints, through the HBP_NUM macro. However, this is not true for every arch. For example, ARM needs to get this information dynamically to handle compatibility between different versions. To solve this, this patch proposes to drop the static HBP_NUM macro and let the arch provide the number of available slots through a new hw_breakpoint_slots() function. For archs that have CONFIG_HAVE_MIXED_BREAKPOINTS_REGS selected, it will be called once, as the number of registers fits for instruction and data breakpoints together. For the others it will be called first to get the number of instruction breakpoint registers and a second time to get the number of data breakpoint registers; the targeted type is given as a parameter of hw_breakpoint_slots().

Reported-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Cc: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Cc: K. Prasad <prasad@linux.vnet.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Jason Wessel <jason.wessel@windriver.com>
Cc: Ingo Molnar <mingo@elte.hu>
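For an arch with mixed registers such as x86, the new hook is trivial; a sketch:

    /* x86 selects CONFIG_HAVE_MIXED_BREAKPOINTS_REGS: one pool of four
     * debug registers serves instruction and data breakpoints alike,
     * so the type argument is ignored. */
    static inline int hw_breakpoint_slots(int type)
    {
        return 4;    /* replaces the static HBP_NUM macro */
    }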
-
Committed by Frederic Weisbecker

The current policies of breakpoints in x86 and SH are the following:

- task bound breakpoints can only break on userspace addresses
- cpu wide breakpoints can only break on kernel addresses

The former rule prevents ptrace breakpoints from being set to trigger on kernel addresses, which is good. But as a side effect, we can't breakpoint on kernel addresses for task bound breakpoints. The latter rule simply makes no sense; there is no reason why we can't set breakpoints on userspace while performing cpu bound profiles.

We want the following new policies:

- task bound breakpoints can set userspace address breakpoints, with no particular privilege required.
- task bound breakpoints can set kernelspace address breakpoints but must be privileged to do that.
- cpu bound breakpoints can do what they want as they are privileged already.

To implement these new policies, this patch checks if we are dealing with a kernel address breakpoint; if so, and if the exclude_kernel parameter is set, we tell the user that the breakpoint is invalid, which makes a good generic ptrace protection. If we don't have exclude_kernel, ensure the user has the right privileges, as kernel breakpoints are quite sensitive (risk of trap recursion attacks and global performance impacts).

[ Paul Mundt: keep addr space check for sh signal delivery and fix double function declaration ]

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Cc: K. Prasad <prasad@linux.vnet.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Jason Wessel <jason.wessel@windriver.com>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
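A sketch of the check the new policy implies (function and helper names are hypothetical):

    static int check_bp_address_policy(struct perf_event *bp)
    {
        if (bp_addr_is_kernel(bp->attr.bp_addr)) {  /* hypothetical helper */
            /* ptrace sets exclude_kernel: kernel addresses are rejected */
            if (bp->attr.exclude_kernel)
                return -EINVAL;
            /* kernel breakpoints are sensitive: require privilege */
            if (!capable(CAP_SYS_ADMIN))
                return -EPERM;
        }
        return 0;
    }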
-
- 30 April 2010 (2 commits)
-
Committed by Liang Li

When specifying the 'reservetop=0xbadc0de' kernel parameter, the kernel will stop booting due to an early_ioremap bug that relates to commit 8827247f. The root cause of the boot failure is that the value of 'slot_virt[i]' is initialized in setup_arch->early_ioremap_init(), but later in setup_arch the function 'parse_early_param' will modify 'FIXADDR_TOP' when 'reservetop=0xbadc0de' is specified. The simplest fix might be to use __fix_to_virt(idx0) to get the updated value of 'FIXADDR_TOP' in '__early_ioremap', instead of referencing the old value from slot_virt[slot] directly.

Changelog since v0:

- v1: when reservetop is handled, FIXADDR_TOP gets adjusted; hence check prev_map, then re-initialize slot_virt and the PMD based on the new FIXADDR_TOP.
- v2: place fixup_early_ioremap (hence a call to early_ioremap_init) in reserve_top_address, to re-initialize slot_virt and the corresponding PMD when parsing reservetop.
- v3: move fixup_early_ioremap out of reserve_top_address to make sure other clients of reserve_top_address, like xen/lguest, won't break.

Signed-off-by: Liang Li <liang.li@windriver.com>
Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Yinghai Lu <yinghai@kernel.org>
Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Wang Chen <wangchen@cn.fujitsu.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
LKML-Reference: <1272621711-8683-1-git-send-email-liang.li@windriver.com>
[ fixed three small cleanliness details in fixup_early_ioremap() ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
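The heart of the v3 fix recomputes the cached slot addresses from the adjusted FIXADDR_TOP; a sketch (the real function also refreshes the PMD):

    static void __init fixup_early_ioremap(void)
    {
        int i;

        for (i = 0; i < FIX_BTMAPS_SLOTS; i++) {
            /* re-derive from the *new* FIXADDR_TOP rather than the
             * values cached earlier by early_ioremap_init() */
            slot_virt[i] = __fix_to_virt(FIX_BTMAP_BEGIN - NR_FIX_BTMAPS * i);
        }
    }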
-
Committed by H. Peter Anvin

Checkin b3ac891b ("x86: Add support for lock prefix in alternatives") did not define LOCK_PREFIX_HERE in the case of a uniprocessor build. As a result, it would cause any of the usages of this macro to fail on a uniprocessor build. Fix this by defining LOCK_PREFIX_HERE as a null string.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Luca Barbieri <luca@luca-barbieri.com>
LKML-Reference: <1267005265-27958-2-git-send-email-luca@luca-barbieri.com>
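The fix in sketch form:

    #ifdef CONFIG_SMP
    #define LOCK_PREFIX_HERE \
            ".section .smp_locks,\"a\"\n"  \
            ".balign 4\n"                  \
            ".long 671f - .\n"  /* record this lock's location */ \
            ".previous\n"                  \
            "671:"
    #define LOCK_PREFIX LOCK_PREFIX_HERE "\n\tlock; "
    #else
    /* UP build: the prefix is pointless, but the macros must still exist */
    #define LOCK_PREFIX_HERE ""
    #define LOCK_PREFIX ""
    #endif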
-
- 29 April 2010 (3 commits)
-
Committed by Jan Beulich

No functional change intended.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
LKML-Reference: <4BCF2690020000780003B340@vpn.id2.novell.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Committed by Jan Beulich

Reduce the SMP locks table size by using relative pointers instead of absolute ones, thus cutting the table size in half.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
LKML-Reference: <4BCF30FE020000780003B3B6@vpn.id2.novell.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
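Each table entry shrinks from a pointer-sized absolute address (8 bytes on x86-64) to a 32-bit self-relative offset. On the consumption side, the patcher reconstructs the address by adding the offset to the entry's own location; a sketch modeled on the unlock path:

    static void smp_unlock_sketch(const s32 *start, const s32 *end)
    {
        const s32 *poff;

        for (poff = start; poff < end; poff++) {
            u8 *ptr = (u8 *)poff + *poff;  /* undo self-relative encoding */

            if (*ptr == 0xf0)  /* 0xf0 = lock prefix opcode */
                text_poke(ptr, ((unsigned char []){0x90}), 1);  /* -> nop */
        }
    }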
-
Committed by Jan Beulich

... generating slightly smaller code.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
LKML-Reference: <4BCF261F020000780003B33C@vpn.id2.novell.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 19 April 2010 (1 commit)
-
Committed by Zhang, Yanmin

This patch introduces perf_guest_info_callbacks and the related register/unregister functions. It also adds more PERF_RECORD_MISC_XXX bits to denote guest kernel and guest user space.

Signed-off-by: Zhang Yanmin <yanmin_zhang@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
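The API described, in sketch form (field comments are interpretation, not the original documentation):

    struct perf_guest_info_callbacks {
        int           (*is_in_guest)(void);   /* did the PMI hit while a guest ran? */
        int           (*is_user_mode)(void);  /* guest user vs guest kernel */
        unsigned long (*get_guest_ip)(void);  /* sample IP inside the guest */
    };

    int perf_register_guest_info_callbacks(struct perf_guest_info_callbacks *cbs);
    int perf_unregister_guest_info_callbacks(struct perf_guest_info_callbacks *cbs);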
-
- 14 April 2010 (1 commit)
-
Committed by Rusty Russell

This is a partial revert of 4cd8b5e2 ("lguest: use KVM hypercalls"); we revert to using the (just as questionable but more reliable) int $15 for hypercalls. I didn't revert the register mapping, so we still use the same calling convention as KVM. KVM in more recent incarnations stopped injecting a fault when a guest tried to use the VMCALL instruction from ring 1, so lguest under KVM fails to make hypercalls. It was nice to share code with our KVM cousins, but this was overreach.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Matias Zabaljauregui <zabaljauregui@gmail.com>
Cc: Avi Kivity <avi@redhat.com>
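A sketch of what an int-based hypercall with the KVM-style register mapping looks like; the argument registers and exact shape here are assumptions, not the literal lguest code:

    static inline unsigned long
    hcall(unsigned long call, unsigned long arg1, unsigned long arg2,
          unsigned long arg3)
    {
        /* "int $15" traps to the host; the KVM-style mapping keeps the
         * call number in %eax and arguments in %ebx/%ecx/%edx */
        asm volatile("int $15"
                     : "=a" (call)
                     : "a" (call), "b" (arg1), "c" (arg2), "d" (arg3)
                     : "memory");
        return call;
    }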
-
- 10 April 2010 (1 commit)
-
Committed by Borislav Petkov

By semi-popular demand, this adds the Core Performance Boost feature flag to /proc/cpuinfo. A possible use case for this is userspace tools like cpufreq-aperf, so that they don't have to jump through the hoops of accessing "/dev/cpu/%d/cpuid" in order to check for CPB hw support, or call cpuid from userspace.

Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
LKML-Reference: <1270065406-1814-2-git-send-email-bp@amd64.org>
Reviewed-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
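For reference, the CPUID source of the flag; a sketch (the kernel itself derives it from its scattered-features table rather than a helper like this):

    /* Core Performance Boost: CPUID 0x80000007 (power management), EDX bit 9 */
    static bool cpu_has_cpb(void)
    {
        unsigned int eax, ebx, ecx, edx;

        cpuid(0x80000007, &eax, &ebx, &ecx, &edx);
        return edx & (1 << 9);
    }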
-
- 07 April 2010 (1 commit)
-
Committed by Chris Wright

To catch potential future issues, we can add a warning whenever we issue a command before the command buffer is fully initialized.

Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
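A sketch of such a guard (the function and field names follow the AMD IOMMU driver of this era; treat the body as illustrative):

    static int iommu_queue_command(struct amd_iommu *iommu, struct iommu_cmd *cmd)
    {
        /* warn loudly if a command is issued before init finished */
        if (WARN_ON(iommu->cmd_buf == NULL))
            return -EIO;

        /* ... copy the command into the ring and update the tail ... */
        return 0;
    }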
-
- 03 April 2010 (2 commits)
-
Committed by Robert Richter

ARCH_PERFMON_EVENTSEL bit masks are often used in the kernel. This patch adds macros for the bit masks and removes the local defines. The function intel_pmu_raw_event() becomes x86_pmu_raw_event(), which is generic across x86 models; the same is done for p6. Duplicate code is removed.

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20100330092821.GH11907@erda.amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
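The architectural EVENTSEL layout these macros encode, for reference:

    #define ARCH_PERFMON_EVENTSEL_EVENT   0x000000FFULL  /* event select */
    #define ARCH_PERFMON_EVENTSEL_UMASK   0x0000FF00ULL  /* unit mask */
    #define ARCH_PERFMON_EVENTSEL_USR     (1ULL << 16)   /* count user */
    #define ARCH_PERFMON_EVENTSEL_OS      (1ULL << 17)   /* count kernel */
    #define ARCH_PERFMON_EVENTSEL_EDGE    (1ULL << 18)   /* edge detect */
    #define ARCH_PERFMON_EVENTSEL_INT     (1ULL << 20)   /* PMI on overflow */
    #define ARCH_PERFMON_EVENTSEL_ANY     (1ULL << 21)   /* any hw thread */
    #define ARCH_PERFMON_EVENTSEL_ENABLE  (1ULL << 22)   /* counter enable */
    #define ARCH_PERFMON_EVENTSEL_INV     (1ULL << 23)   /* invert cmask */
    #define ARCH_PERFMON_EVENTSEL_CMASK   0xFF000000ULL  /* counter mask */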
-
Committed by Robert Richter

The big rename

  cdd6c482 perf: Do the big rename: Performance Counters -> Performance Events

accidentally renamed some members of structs that were named after registers in the spec. To avoid confusion this patch reverts some changes. The related specs are the MSR descriptions in AMD's BKDGs and the ARCHITECTURAL PERFORMANCE MONITORING section in the Intel 64 and IA-32 Architectures Software Developer's Manuals.

This patch does:

  $ sed -i -e 's:num_events:num_counters:g' \
      arch/x86/include/asm/perf_event.h \
      arch/x86/kernel/cpu/perf_event_amd.c \
      arch/x86/kernel/cpu/perf_event.c \
      arch/x86/kernel/cpu/perf_event_intel.c \
      arch/x86/kernel/cpu/perf_event_p6.c \
      arch/x86/kernel/cpu/perf_event_p4.c \
      arch/x86/oprofile/op_model_ppro.c

  $ sed -i -e 's:event_bits:cntval_bits:g' -e 's:event_mask:cntval_mask:g' \
      arch/x86/kernel/cpu/perf_event_amd.c \
      arch/x86/kernel/cpu/perf_event.c \
      arch/x86/kernel/cpu/perf_event_intel.c \
      arch/x86/kernel/cpu/perf_event_p6.c \
      arch/x86/kernel/cpu/perf_event_p4.c

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1269880612-25800-2-git-send-email-robert.richter@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 30 March 2010 (1 commit)
-
Committed by Tejun Heo

Including slab.h from x86 pgtable_32.h creates a troublesome dependency chain w/ ftrace enabled. The following chain leads to inclusion of pgtable_32.h from define_trace.h:

  trace/define_trace.h
    trace/ftrace.h
      linux/ftrace_event.h
        linux/ring_buffer.h
          linux/mm.h
            asm/pgtable.h
              asm/pgtable_32.h

slab.h itself defines trace hooks via:

  linux/sl[aou]b_def.h
    linux/kmemtrace.h
      trace/events/kmem.h

If slab.h is not included before define_trace.h is included, this leads to duplicate definitions of kmemtrace hooks or other include dependency problems. pgtable_32.h doesn't need slab.h to begin with; don't include it from there.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
Acked-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: H. Peter Anvin <hpa@zytor.com>
-
- 26 March 2010 (4 commits)
-
Committed by Peter Zijlstra

Implement ptrace-block-step using TIF_BLOCKSTEP, which will set DEBUGCTLMSR_BTF when set for a task while preserving any other DEBUGCTLMSR bits.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20100325135414.017536066@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
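A sketch of the switch-time update this implies, preserving the unrelated DEBUGCTLMSR bits (the function name is hypothetical):

    /* called when TIF_BLOCKSTEP differs between the outgoing and
     * incoming task */
    static void update_blockstep(struct task_struct *next)
    {
        unsigned long debugctl = get_debugctlmsr();

        if (test_tsk_thread_flag(next, TIF_BLOCKSTEP))
            debugctl |= DEBUGCTLMSR_BTF;   /* #DB on every branch */
        else
            debugctl &= ~DEBUGCTLMSR_BTF;

        update_debugctlmsr(debugctl);      /* other bits untouched */
    }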
-
Committed by Peter Zijlstra

Support for the PMU's BTS features has been upstreamed in v2.6.32, but we still have the old and disabled ptrace-BTS, as Linus noticed not so long ago. It's buggy: TIF_DEBUGCTLMSR is trampling all over that MSR without regard for other uses (perf) and doesn't provide the flexibility needed for perf either. Its users are ptrace-block-step and ptrace-bts; since ptrace-bts was never used and ptrace-block-step can be implemented using a much simpler approach, axe all 3000 lines of it. That includes the *locked_memory*() APIs in mm/mlock.c as well.

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Roland McGrath <roland@redhat.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Markus Metzger <markus.t.metzger@intel.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
LKML-Reference: <20100325135413.938004390@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Peter Zijlstra

Move all the debugctlmsr thingies into msr-index.h.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20100325135413.861425293@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
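The constants being consolidated, for reference:

    #define MSR_IA32_DEBUGCTLMSR  0x000001d9
    #define DEBUGCTLMSR_LBR       (1UL << 0)  /* last branch recording */
    #define DEBUGCTLMSR_BTF       (1UL << 1)  /* single-step on branches */
    #define DEBUGCTLMSR_TR        (1UL << 6)  /* branch trace messages */
    #define DEBUGCTLMSR_BTS       (1UL << 7)  /* branch trace store */
    #define DEBUGCTLMSR_BTINT     (1UL << 8)  /* BTS buffer interrupt */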
-
Committed by Cyrill Gorcunov

The adding of raw event support led to a complete code refactoring. I hope it became more readable than it was. The list of changes:

1) The 64-bit config field is enough to hold all the information we need to track event details. To achieve this we use our *own* enum for event selection in the ESCR register and map this key into the proper value at the moment the event is enabled. For the same reason we use the 12 LSB bits in the CCCR register - to track which exact cache trace event was requested. And we clear these bits at the real 'write' moment.

2) There is no per-cpu area reserved for the P4 PMU anymore. We don't need it. All is held by the config.

3) Now we may use any available counter, i.e. we try to grab any possible counter.

v2:
- Lin Ming reported the lack of the ESCR selector in CCCR for cache events

v3:
- Don't lose cache event codes at the config unpacking procedure; we may need them one day, so no obscure hacks behind our back. Better to clear reserved bits explicitly when needed (thanks Ming for pointing this out)
- Lin Ming fixed misplaced opcodes in cache events

Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
Tested-by: Lin Ming <ming.m.lin@intel.com>
Signed-off-by: Lin Ming <ming.m.lin@intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <1269403766.3409.6.camel@minggr.sh.intel.com>
[ v4: did a few whitespace fixlets ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
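The packing from point 1) in sketch form: ESCR selection in the upper 32 bits of the config, CCCR bits in the lower 32 (following the commit's description; masking details may differ from the merged code):

    #define p4_config_pack_escr(v)    (((u64)(v)) << 32)
    #define p4_config_pack_cccr(v)    (((u64)(v)) & 0xffffffffULL)
    #define p4_config_unpack_escr(v)  (((u64)(v)) >> 32)
    #define p4_config_unpack_cccr(v)  (((u64)(v)) & 0xffffffffULL)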
-
- 20 March 2010 (1 commit)
-
Committed by Andreas Herrmann

Currently c1e_idle returns true for all CPUs greater than or equal to family 0xf model 0x40. This covers too many CPUs. Meanwhile, a respective erratum for the underlying problem was filed (#400). This patch adds the logic to check whether erratum #400 applies to a given CPU. Especially for CPUs where SMI/HW triggered C1e is not supported, c1e_idle() doesn't need to be used. We can check this by looking at the respective OSVW bit for erratum #400.

Cc: <stable@kernel.org> # .32.x .33.x
Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
LKML-Reference: <20100319110922.GA19614@alberich.amd.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
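A sketch of the OSVW (OS Visible Workaround) query; the MSR names exist in msr-index.h, while the fallback helper here is hypothetical:

    static bool amd_erratum_400_applies(const struct cpuinfo_x86 *c)
    {
        u64 val;

        if (cpu_has(c, X86_FEATURE_OSVW)) {
            rdmsrl(MSR_AMD64_OSVW_ID_LENGTH, val);
            if (val >= 2) {  /* OSVW data covers erratum #400 */
                rdmsrl(MSR_AMD64_OSVW_STATUS, val);
                return val & (1ULL << 1);  /* bit 1 = erratum #400 */
            }
        }
        /* no usable OSVW data: fall back to family/model heuristics */
        return family_model_in_e400_range(c);  /* hypothetical */
    }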
-
- 19 March 2010 (3 commits)
-
Committed by Lin Ming

Indexes 0-6 in p4_templates are reserved for common hardware events, so p4_templates is arranged as below:

  0 - 6:     common hardware events
  7 - N:     cache events
  N+1 - ...: other raw events

Reported-by: Cyrill Gorcunov <gorcunov@openvz.org>
Signed-off-by: Lin Ming <ming.m.lin@intel.com>
Acked-by: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <1268983738.13901.142.camel@minggr.sh.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Cyrill Gorcunov

- A few ESCRs escaped fixing in the previous attempt.
- p4_escr_map is read-only; make it const.

Nothing serious.

Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Lin Ming <ming.m.lin@intel.com>
LKML-Reference: <20100318211256.GH5062@lenovo>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Lin Ming

Move the HT bit setting code from p4_pmu_event_map to p4_hw_config, so the cache events can get the HT bit set correctly.

Tested on my P4 desktop; the 6 cache events below work:

  L1-dcache-load-misses
  LLC-load-misses
  dTLB-load-misses
  dTLB-store-misses
  iTLB-loads
  iTLB-load-misses

Signed-off-by: Lin Ming <ming.m.lin@intel.com>
Reviewed-by: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <1268908392.13901.128.camel@minggr.sh.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-