- 13 Aug 2014, 8 commits
-
-
Submitted by Andi Kleen

This fixes a side effect of Kan's earlier patch to probe the LBRs at boot time. Normally, when the LBRs are disabled, cycles:pp is disabled too, so for example cycles:pp does not work. However, this is not needed with PEBSv2 and later (Haswell), because PEBSv2 does not need the LBRs to correct the off-by-one IP. So add an extra check for PEBSv2 that also allows :pp.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: kan.liang@intel.com
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Link: http://lkml.kernel.org/r/1407456534-15747-1-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
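A minimal sketch of the extra check being described, for illustration only; the x86_pmu field names are recalled from memory and may not match the tree exactly:

    /* Sketch only, not the upstream diff. */
    static bool precise_pp_allowed(void)
    {
        /*
         * :pp needs the sampled IP moved back to the precise
         * instruction.  Pre-Haswell cores do that with an LBR
         * fixup, so the LBRs must have probed OK; PEBSv2+
         * (Haswell) records the real IP in the PEBS record
         * itself and needs no LBRs at all.
         */
        return x86_pmu.lbr_nr > 0 || x86_pmu.intel_cap.pebs_format >= 2;
    }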
-
Submitted by Andi Kleen

HSW-EP has a larger offcore mask than the client Haswell CPUs; it is the same mask as on Sandy/IvyBridge-EP. All of Haswell was using the client mask, so some bits were missing. On the client parts some bits were also missing compared to Sandy/IvyBridge, in particular the bits to match on an L4 cache hit. The Haswell core in both its client and server incarnations accepts the same bits (some are nops), so we can use the same mask. Therefore use the snbep extended mask, which is a superset of both the client and the server masks, for all of Haswell. This allows specifying a number of extra offcore events, for example on HSW-EP:

    % perf stat -e cpu/event=0xb7,umask=0x1,offcore_rsp=0x3fffc00100,name=offcore_response_pf_l3_rfo_l3_miss_any_response/ true

which were <not supported> before.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Reviewed-by: eranian@google.com
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Link: http://lkml.kernel.org/r/1406840722-25416-1-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Fengguang Wu

arch/x86/kernel/cpu/perf_event_intel_uncore_nhmex.c:961:2-3: Unneeded semicolon
arch/x86/kernel/cpu/perf_event_intel_uncore_nhmex.c:1100:2-3: Unneeded semicolon
arch/x86/kernel/cpu/perf_event_intel_uncore_nhmex.c:1138:2-3: Unneeded semicolon

Remove the unneeded semicolons.

Generated by: scripts/coccinelle/misc/semicolon.cocci

Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Yan, Zheng <zheng.z.yan@intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Link: http://lkml.kernel.org/n/tip-ovfvr4nbqjo7nzc16y2lpjy9@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
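For illustration, the kind of pattern semicolon.cocci flags; this is a made-up function, not the actual uncore code:

    /* Illustrative only. */
    static int pick_reg(int idx)
    {
        int reg;

        switch (idx) {
        case 0:
            reg = 1;
            break;
        default:
            reg = 0;
            break;
        };  /* <- the unneeded semicolon; the patch simply deletes it */

        return reg;
    }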
-
Submitted by Yan, Zheng

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1406704935-27708-4-git-send-email-zheng.z.yan@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Yan, Zheng

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1406704935-27708-3-git-send-email-zheng.z.yan@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Yan, Zheng

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Stephane Eranian <eranian@google.com>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1406704935-27708-2-git-send-email-zheng.z.yan@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Yan, Zheng

Prepare for moving the hardware-specific code to separate files.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: eranian@google.com
Cc: andi@firstfloor.org
Link: http://lkml.kernel.org/r/1406704935-27708-1-git-send-email-zheng.z.yan@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Peter Zijlstra

The model number descriptions got a bit messy; clean them up.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/n/tip-oo3xclxdoy8s7ubssn929vaj@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 03 Aug 2014, 1 commit
-
-
Submitted by Dan Carpenter

I don't know if we really need 64 bits here, but these variables are declared as u64 and it can't hurt to add the cast, so that we prevent any shift wrapping.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Aubrey Li <aubrey.li@linux.intel.com>
Link: http://lkml.kernel.org/r/20140801082715.GE28869@mwanda
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
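The class of bug the cast guards against, as a minimal sketch; the names are made up and not taken from the PMC code:

    #include <linux/types.h>

    /* Illustrative only. */
    static u64 make_mask(u32 bit)
    {
        /*
         * "1 << bit" would be an int shift and wrap for bit >= 32;
         * casting to u64 first keeps the whole shift in 64 bits.
         */
        return (u64)1 << bit;
    }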
-
- 31 Jul 2014, 1 commit
-
-
Submitted by Dave Hansen

I think the flush_tlb_mm_range() code that tries to tune the flush sizes based on the CPU needs to get ripped out, for several reasons:

1. It is obviously buggy. It uses mm->total_vm to judge the task's footprint in the TLB. It should certainly be using some measure of RSS, *NOT* ->total_vm, since only resident memory can populate the TLB.
2. Haswell and several other CPUs are missing from the intel_tlb_flushall_shift_set() function. Thus, it has been demonstrated to bitrot quickly in practice.
3. It is plain wrong in my VM:
       [    0.037444] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
       [    0.037444] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0
       [    0.037444] tlb_flushall_shift: 6
   which leads it to never use invlpg.
4. The assumptions about TLB refill costs are wrong:
       http://lkml.kernel.org/r/1337782555-8088-3-git-send-email-alex.shi@intel.com
   (more on this in later patches)
5. I can not reproduce the original data: https://lkml.org/lkml/2012/5/17/59
   I believe the sample times were too short. Running the benchmark in a loop yields times that vary quite a bit.

Note that this leaves us with a static ceiling of 1 page. This is a conservative, dumb setting, and will be revised in a later patch.

This also removes the code which attempts to predict whether we are flushing data or instructions. We expect instruction flushes to be relatively rare and not worth tuning for explicitly.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: http://lkml.kernel.org/r/20140731154055.ABC88E89@viggo.jf.intel.com
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- 29 Jul 2014, 1 commit
-
-
Submitted by Andy Lutomirski

This moves the espfix64 logic into native_iret. To make this work, it gets rid of the native patch for INTERRUPT_RETURN: INTERRUPT_RETURN on native kernels is now 'jmp native_iret'. This changes the 16-bit SS behavior on Xen from OOPSing to leaking some bits of the Xen hypervisor's RSP (I think).

[ hpa: this is a nonzero cost on native, but probably not enough to measure. Xen needs to fix this in their own code, probably doing something equivalent to espfix64. ]

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Link: http://lkml.kernel.org/r/7b8f1d8ef6597cb16ae004a43c56980a7de3cf94.1406129132.git.luto@amacapital.net
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: <stable@vger.kernel.org>
-
- 26 Jul 2014, 4 commits
-
-
Submitted by Andy Lutomirski

This commit in Linux 3.6:

    commit c767a54b
    Author: Joe Perches <joe@perches.com>
    Date:   Mon May 21 19:50:07 2012 -0700

        x86/debug: Add KERN_<LEVEL> to bare printks, convert printks to pr_<level>

caused warn_bad_vsyscall to output garbage in the middle of the line. Revert the bad part of it.

The printk in question isn't actually bare; the level is "%s".

The bug this fixes is purely cosmetic; backports are optional.

Cc: <stable@vger.kernel.org> # v3.6+
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Link: http://lkml.kernel.org/r/03eac1f24110bbe496ecc12a4df467e0d88466d4.1406330947.git.luto@amacapital.net
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Submitted by Li, Aubrey

Add the following interfaces to expose the PMC device state and the sleep state residency via debugfs:

    /sys/kernel/debug/pmc_atom/dev_state
    /sys/kernel/debug/pmc_atom/sleep_state

Signed-off-by: Aubrey Li <aubrey.li@linux.intel.com>
Link: http://lkml.kernel.org/r/53B0FF59.8000600@linux.intel.com
Signed-off-by: Kasagar, Srinidhi <srinidhi.kasagar@intel.com>
Reviewed-by: Rudramuni, Vishwesh M <vishwesh.m.rudramuni@intel.com>
Reviewed-by: Joe Perches <joe@perches.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
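A hedged sketch of how such files are typically wired up; pmc_dev_state_fops and pmc_sleep_state_fops stand in for the driver's actual file_operations and are hypothetical names:

    #include <linux/debugfs.h>

    /* Sketch only, assuming hypothetical fops that print the registers. */
    static struct dentry *pmc_dbgfs_dir;

    static int pmc_dbgfs_register(void)
    {
        struct dentry *dir = debugfs_create_dir("pmc_atom", NULL);

        if (!dir)
            return -ENOMEM;

        debugfs_create_file("dev_state", S_IFREG | S_IRUGO, dir, NULL,
                            &pmc_dev_state_fops);
        debugfs_create_file("sleep_state", S_IFREG | S_IRUGO, dir, NULL,
                            &pmc_sleep_state_fops);

        pmc_dbgfs_dir = dir;
        return 0;
    }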
-
Submitted by Li, Aubrey

Disable the following PMC S0IX_WAKE_EN events:

- events coming from the LPC block (unused);
- GPIO_SUS ored dedicated IRQs (must be disabled per the PMC programming rule);
- GPIO_SCORE ored dedicated IRQs (must be disabled per the PMC programming rule);
- GPIO_SUS shared IRQ (not necessary, since the IOAPIC_DS wake event will still work);
- GPIO_SCORE shared IRQ (not necessary, since the IOAPIC_DS wake event will still work).

Signed-off-by: Aubrey Li <aubrey.li@linux.intel.com>
Link: http://lkml.kernel.org/r/53B0FF22.5080403@linux.intel.com
Signed-off-by: Olivier Leveque <olivier.leveque@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Submitted by Li, Aubrey

The Power Management Controller (PMC) controls many of the power management features present in the Atom SoC. This driver provides a native power-off function via the PMC PCI IO port.

On some ACPI hardware-reduced platforms (e.g. the ASUS T100), the ACPI sleep registers are not valid, so (*pm_power_off)() is not hooked by acpi_power_off(). The power-off function in this driver is installed only when pm_power_off is NULL.

Signed-off-by: Aubrey Li <aubrey.li@linux.intel.com>
Link: http://lkml.kernel.org/r/53B0FEEA.3010805@linux.intel.com
Signed-off-by: Lejun Zhu <lejun.zhu@linux.intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
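A minimal sketch of the "install only when nothing else claimed it" logic; pmc_power_off() is a stand-in name, not the driver's actual symbol:

    #include <linux/pm.h>

    /* Hypothetical power-off routine writing the PMC IO port. */
    static void pmc_power_off(void)
    {
        /* write the SoC-specific "power off" value to the PMC IO port */
    }

    static void pmc_hw_init(void)
    {
        /* Install our handler only if nothing (e.g. ACPI) claimed it. */
        if (pm_power_off == NULL)
            pm_power_off = pmc_power_off;
    }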
-
- 23 Jul 2014, 3 commits
-
-
Submitted by Peter Zijlstra

P4 systems with cpuid level < 4 can have SMT, but the cache topology description available (cpuid2) does not include SMP information.

Now we know that SMT shares all cache levels, and therefore we can mark all available cache levels as shared. We do this by setting cpu_llc_id to ->phys_proc_id, since that is the same for each SMT thread. We can do this unconditionally, since if there is no SMT it is still true: the one CPU shares cache with only itself.

This fixes a problem where such CPUs report an incorrect LLC CPU mask. This in turn fixes a crash in the scheduler, where the topology was built wrong; it assumes the LLC mask to include at least the SMT CPUs.

Cc: Josh Boyer <jwboyer@redhat.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Tested-by: Bruno Wolff III <bruno@wolff.to>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20140722133514.GM12054@laptop.lan
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
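A hedged sketch of the idea, not the literal diff; the exact condition and call site in the tree may differ:

    /* Sketch only. */
    static void fixup_llc_id(struct cpuinfo_x86 *c, unsigned int cpu)
    {
        /*
         * Without cpuid leaf 4 there is no cache topology, so key the
         * LLC id off the package id; SMT siblings are then marked as
         * sharing the LLC, and a non-SMT CPU shares it with itself.
         */
        if (c->cpuid_level < 4)
            per_cpu(cpu_llc_id, cpu) = c->phys_proc_id;
    }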
-
Submitted by Tomasz Nowicki

GHES currently maps two pages with atomic ioremap. From now on, NMI support is architecture dependent, so there is no need to allocate an NMI page for platforms without NMI support.

To make it possible to not use a second page, swap the existing page order so that the IRQ context page comes first and the optional NMI context page comes second. Then use HAVE_ACPI_APEI_NMI to decide how many pages are to be allocated.

Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
Acked-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Submitted by Tomasz Nowicki

This commit abstracts the MCE calls and provides weak default implementations for those architectures which do not need arch-specific actions. Each platform that wants to take additional architectural actions should provide the desired function definition. This allows us to avoid wrapping generic code in #ifdefs and also prevents new platforms from having to introduce dummy stub functions.

Initially, there are two APEI arch-specific calls:

- arch_apei_enable_cmcff()
- arch_apei_report_mem_error()

Both interact with the MCE driver on the x86 architecture.

Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
Acked-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Tony Luck <tony.luck@intel.com>
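The weak-default pattern this describes, as a hedged sketch; the exact upstream signatures may differ slightly from what is shown:

    /* Sketch of generic weak defaults; an arch (e.g. x86) overrides
     * these with real MCE interaction. */
    int __weak arch_apei_enable_cmcff(struct acpi_hest_header *hest_hdr,
                                      void *data)
    {
        return 1;   /* nothing arch-specific to do; keep iterating */
    }

    void __weak arch_apei_report_mem_error(int sev,
                                           struct cper_sec_mem_err *mem_err)
    {
        /* no-op on architectures without an MCE-style reporting path */
    }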
-
- 22 Jul 2014, 2 commits
-
-
Submitted by Sven Wegener

Commit 554086d8 ("x86_32, entry: Do syscall exit work on badsys (CVE-2014-4508)") introduced a regression in the x86_32 syscall entry code, resulting in syscall() not returning proper errors for undefined syscalls on CPUs supporting the sysenter feature.

The following code:

> int result = syscall(666);
> printf("result=%d errno=%d error=%s\n", result, errno, strerror(errno));

results in:

> result=666 errno=0 error=Success

Obviously, the syscall return value is the called syscall number, but it should have been an ENOSYS error. When run under ptrace it behaves correctly, which makes it hard to debug in the wild:

> result=-1 errno=38 error=Function not implemented

The %eax register is the return value register. For debugging via ptrace, the syscall entry code stores the complete register context on the stack. The badsys handlers only store the ENOSYS error code in the ptrace register set and do not set %eax like a regular syscall handler would. The old resume_userspace call chain contains code that clobbers %eax and restores %eax from the ptrace registers afterwards. The same goes for the ptrace-enabled call chain. When ptrace is not used, the syscall return value is the passed-in syscall number from the untouched %eax register.

Use %eax as the return value register in syscall_badsys and sysenter_badsys, like a real syscall handler does, and have the caller push the value onto the stack for ptrace access.

Signed-off-by: Sven Wegener <sven.wegener@stealer.net>
Link: http://lkml.kernel.org/r/alpine.LNX.2.11.1407221022380.31021@titan.int.lan.stealer.net
Reviewed-and-tested-by: Andy Lutomirski <luto@amacapital.net>
Cc: <stable@vger.kernel.org> # If 554086d8 is backported
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
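For convenience, the reproducer quoted above as a complete, compilable program; nothing here is assumed beyond the report itself:

    /* On an affected kernel (sysenter path, no ptrace) this prints
     * "result=666 errno=0 error=Success" instead of an ENOSYS error. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
        int result = syscall(666);  /* 666 is not a valid syscall */

        printf("result=%d errno=%d error=%s\n",
               result, errno, strerror(errno));
        return 0;
    }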
-
Submitted by Borislav Petkov

BorisO reports that misc_register() fails often on Xen. The current code unregisters the CPU hotplug notifier in that case. If a CPU is then offlined and onlined back again, we end up with a second timer running on that CPU, leading to soft lockups and system hangs.

So let's leave the hotcpu notifier always registered, even if mce_device_create() failed for some cores, and never unregister it, so that we can deal with the timer handling accordingly.

Reported-and-Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Link: http://lkml.kernel.org/r/1403274493-1371-1-git-send-email-boris.ostrovsky@oracle.com
Signed-off-by: Borislav Petkov <bp@suse.de>
-
- 19 Jul 2014, 4 commits
-
-
Submitted by Daniel Kiper

We've got constants, so let's use them instead of hard-coded values.

Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
-
Submitted by Matt Fleming

It appears that the BayTrail-T class of hardware requires EFI in order to power down and reboot; no other reliable method exists.

This quirk is generally applicable to all hardware that has the ACPI Hardware Reduced bit set, since usually ACPI would be the preferred method.

Cc: Len Brown <len.brown@intel.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
-
Submitted by Matt Fleming

Implement efi_reboot(), which is really just a wrapper around the EfiResetSystem() EFI runtime service, but it does at least allow us to funnel all callers through a single location.

It also simplifies the call sites, since users no longer need to check whether EFI_RUNTIME_SERVICES is enabled.

Cc: Tony Luck <tony.luck@intel.com>
Tested-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
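A hedged sketch of such a wrapper; the real function also maps the requested reboot mode onto warm/cold EFI resets, which is omitted here for brevity:

    #include <linux/efi.h>
    #include <linux/reboot.h>

    /* Sketch only; argument handling is simplified. */
    void efi_reboot(enum reboot_mode reboot_mode, const char *unused)
    {
        /* Callers no longer need this check themselves. */
        if (!efi_enabled(EFI_RUNTIME_SERVICES))
            return;

        efi.reset_system(EFI_RESET_COLD, EFI_SUCCESS, 0, NULL);
    }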
-
Submitted by Steven Rostedt (Red Hat)

Nothing sets function_trace_stop to disable function tracing anymore. Remove the check for it in the arch code.

Link: http://lkml.kernel.org/r/53C54D32.6000000@zytor.com
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
- 17 Jul 2014, 2 commits
-
-
Submitted by Steven Rostedt (Red Hat)

ftrace_stop() is going away, as it disables parts of function tracing that affect users who should not be affected. But ftrace_graph_stop() is built on ftrace_stop(). Here's another example of killing all of function tracing because something went wrong with function graph tracing.

Instead of disabling all users of function tracing on a function graph error, disable only function graph tracing. To do this, the arch code must call ftrace_graph_is_dead() before it implements function graph.

Link: http://lkml.kernel.org/r/53C54D18.3020602@zytor.com
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
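A hedged sketch of what the arch-side check looks like; the surrounding function is heavily simplified and only the new early return matters:

    /* Sketch of the x86 return-hook entry point. */
    void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
                               unsigned long frame_pointer)
    {
        /*
         * If the graph tracer shut itself down after an error, bail
         * out here and leave normal function tracing untouched.
         */
        if (unlikely(ftrace_graph_is_dead()))
            return;

        /* ... hook the return address as before ... */
    }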
-
Submitted by Christoph Schulz

Commit 30919b0b ("x86: avoid low BIOS area when allocating address space") moved the test for resource allocations that fall within the first 1MB of address space from the PCI-specific path to a generic path, such that all resource allocations will avoid this area. However, this breaks ISA cards which need to allocate a memory region within the first 1MB. An example is the i82365 PCMCIA controller and derivatives like the Ricoh RF5C296/396, which map part of the PCMCIA socket memory address space into the first 1MB of system memory address space. They do not work anymore, as no usable memory region exists due to this change:

  Intel ISA PCIC probe: Ricoh RF5C296/396 ISA-to-PCMCIA at port 0x3e0 ofs 0x00, 2 sockets
  host opts [0]: none
  host opts [1]: none
  ISA irqs (scanned) = 3,4,5,9,10 status change on irq 10
  pcmcia_socket pcmcia_socket1: pccard: PCMCIA card inserted into slot 1
  pcmcia_socket pcmcia_socket0: cs: IO port probe 0xc00-0xcff: excluding 0xcf8-0xcff
  pcmcia_socket pcmcia_socket0: cs: IO port probe 0xa00-0xaff: clean.
  pcmcia_socket pcmcia_socket0: cs: IO port probe 0x100-0x3ff: excluding 0x170-0x177 0x1f0-0x1f7 0x2f8-0x2ff 0x370-0x37f 0x3c0-0x3e7 0x3f0-0x3ff
  pcmcia_socket pcmcia_socket0: cs: memory probe 0x0a0000-0x0affff: excluding 0xa0000-0xaffff
  pcmcia_socket pcmcia_socket0: cs: memory probe 0x0b0000-0x0bffff: excluding 0xb0000-0xbffff
  pcmcia_socket pcmcia_socket0: cs: memory probe 0x0c0000-0x0cffff: excluding 0xc0000-0xcbfff
  pcmcia_socket pcmcia_socket0: cs: memory probe 0x0d0000-0x0dffff: clean.
  pcmcia_socket pcmcia_socket0: cs: memory probe 0x0e0000-0x0effff: clean.
  pcmcia_socket pcmcia_socket0: cs: memory probe 0x60000000-0x60ffffff: clean.
  pcmcia_socket pcmcia_socket0: cs: memory probe 0xa0000000-0xa0ffffff: clean.
  pcmcia_socket pcmcia_socket1: cs: IO port probe 0xc00-0xcff: excluding 0xcf8-0xcff
  pcmcia_socket pcmcia_socket1: cs: IO port probe 0xa00-0xaff: clean.
  pcmcia_socket pcmcia_socket1: cs: IO port probe 0x100-0x3ff: excluding 0x170-0x177 0x1f0-0x1f7 0x2f8-0x2ff 0x370-0x37f 0x3c0-0x3e7 0x3f0-0x3ff
  pcmcia_socket pcmcia_socket1: cs: memory probe 0x0a0000-0x0affff: excluding 0xa0000-0xaffff
  pcmcia_socket pcmcia_socket1: cs: memory probe 0x0b0000-0x0bffff: excluding 0xb0000-0xbffff
  pcmcia_socket pcmcia_socket1: cs: memory probe 0x0c0000-0x0cffff: excluding 0xc0000-0xcbfff
  pcmcia_socket pcmcia_socket1: cs: memory probe 0x0d0000-0x0dffff: clean.
  pcmcia_socket pcmcia_socket1: cs: memory probe 0x0e0000-0x0effff: clean.
  pcmcia_socket pcmcia_socket1: cs: memory probe 0x60000000-0x60ffffff: clean.
  pcmcia_socket pcmcia_socket1: cs: memory probe 0xa0000000-0xa0ffffff: clean.
  pcmcia_socket pcmcia_socket1: cs: memory probe 0x0cc000-0x0effff: excluding 0xe0000-0xeffff
  pcmcia_socket pcmcia_socket1: cs: unable to map card memory!

If filtering out the first 1MB is reverted, everything works as expected.

Tested-by: Robert Resch <fli4l@robert.reschpara.de>
Signed-off-by: Christoph Schulz <develop@kristov.de>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
CC: stable@vger.kernel.org # v2.6.37+
-
- 16 Jul 2014, 8 commits
-
-
Submitted by Jan Beulich

With the conversion of the register-saving code from macros to functions, and with those functions not clobbering most of the registers they spill, there is no need to annotate most of the spill operations; the only exceptions are %rbx (always modified) and %rcx (modified on the error_kernelspace: path).

Also remove a bogus commented-out annotation: there is no register %orig_rax after all.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Link: http://lkml.kernel.org/r/53AAE69A020000780001D3C7@mail.emea.novell.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Andy Lutomirski

This commit:

    commit 6f6343f5
    Author: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
    Date:   Thu Apr 17 17:17:33 2014 +0900

        kprobes/x86: Call exception handlers directly from do_int3/do_debug

appears to have inadvertently dropped a check that the int3 came from kernel mode. Trying to dereference addr when addr is user-controlled is completely bogus.

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Link: http://lkml.kernel.org/r/c4e339882c121aa76254f2adde3fcbdf502faec2.1405099506.git.luto@amacapital.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
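A hedged sketch of the restored check; the handler name and dispatch details are recalled from memory (3.16-era kernels spell the predicate user_mode_vm(), later ones user_mode()):

    /* Sketch only. */
    static int kprobe_int3_handler(struct pt_regs *regs)
    {
        kprobe_opcode_t *addr;

        /* A user-space int3 is never a kernel kprobe; do not go on to
         * dereference an address derived from user-controlled state. */
        if (user_mode_vm(regs))
            return 0;

        addr = (kprobe_opcode_t *)(regs->ip - sizeof(kprobe_opcode_t));
        /* ... existing kprobe dispatch on addr ... */
        return 0;
    }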
-
Submitted by David Rientjes

It's unnecessary to excessively spam the kernel log every time the BTS buffer cannot be allocated, so make this allocation __GFP_NOWARN.

The user will probably still want at least some artifact showing that the allocation has failed in the past, likely due to fragmentation because of its large size, when it's not allocated at bootstrap. Thus, add a WARN_ONCE() so something is left behind for them to understand why perf commands that require PEBS are not working properly.

Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1406301600460.26302@chino.kir.corp.google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
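A hedged sketch of the resulting allocation path; the surrounding function is simplified:

    /* Sketch only. */
    static int alloc_bts_buffer(int cpu)
    {
        int node = cpu_to_node(cpu);
        void *buffer;

        buffer = kzalloc_node(BTS_BUFFER_SIZE, GFP_KERNEL | __GFP_NOWARN,
                              node);
        if (unlikely(!buffer)) {
            /* one breadcrumb instead of a full page-allocation splat */
            WARN_ONCE(1, "%s: BTS buffer allocation failure\n", __func__);
            return -ENOMEM;
        }

        /* ... wire the buffer into the DS area as before ... */
        return 0;
    }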
-
Submitted by Zhouyi Zhou

Following Peter's advice, move the failure handling into a goto chain. Compile-tested on x86_64; please check whether anything was missed.

Signed-off-by: Zhouyi Zhou <yizhouzhou@ict.ac.cn>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1402459743-20513-1-git-send-email-zhouzhouyi@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
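A generic sketch of the goto-unwind style being adopted; the names are illustrative and not the actual perf code:

    /* Sketch only. */
    static int example_init(void)
    {
        void *buf;
        int ret;

        buf = alloc_buffer();
        if (!buf)
            return -ENOMEM;

        ret = register_first();
        if (ret)
            goto err_free_buf;

        ret = register_second();
        if (ret)
            goto err_unregister_first;

        return 0;

    err_unregister_first:
        unregister_first();
    err_free_buf:
        free_buffer(buf);
        return ret;
    }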
-
Submitted by Kan Liang

With -cpu host, KVM reports LBR and extra_regs support if the host has support. When the guest perf driver tries to access an LBR or extra_regs MSR, the access #GPs, since KVM doesn't actually handle LBR and extra_regs support. So check the related MSR access rights once at initialization time to avoid the erroneous accesses at runtime.

To reproduce the issue, build the kernel with CONFIG_KVM_INTEL=y (for the host kernel), and with CONFIG_PARAVIRT=n and CONFIG_KVM_GUEST=n (for the guest kernel). Start the guest with -cpu host. Run perf record with --branch-any or --branch-filter in the guest to trigger the LBR #GP. Run perf stat on offcore events (e.g. LLC-loads/LLC-load-misses ...) in the guest to trigger the offcore_rsp #GP.

Signed-off-by: Kan Liang <kan.liang@intel.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Maria Dimakopoulou <maria.n.dimakopoulou@gmail.com>
Cc: Mark Davies <junk@eslaf.co.uk>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Yan, Zheng <zheng.z.yan@intel.com>
Link: http://lkml.kernel.org/r/1405365957-20202-1-git-send-email-kan.liang@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
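A hedged sketch of the boot-time probe; the upstream helper differs in detail but follows this rdmsrl_safe()/wrmsrl_safe() pattern:

    /* Sketch only. */
    static bool check_msr(unsigned long msr, u64 mask)
    {
        u64 val_old, val_new, val_tmp;

        /* A #GP inside the *_safe() accessors is caught and reported
         * as a non-zero return instead of faulting at runtime. */
        if (rdmsrl_safe(msr, &val_old))
            return false;

        /* Flip some harmless bits and see whether the write sticks. */
        val_tmp = val_old ^ mask;
        if (wrmsrl_safe(msr, val_tmp) || rdmsrl_safe(msr, &val_new))
            return false;

        if (val_new != val_tmp)
            return false;

        wrmsrl_safe(msr, val_old);      /* restore the original value */
        return true;
    }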
-
Submitted by Stephane Eranian

This patch fixes the SNB-EP and IVT Cbox filter mapping table. The table controls which filters are supported by which events. There were several mistakes in those tables causing some filters to be ignored, such as NID on TOR_INSERTS.

Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: zheng.z.yan@intel.com
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20140630144624.GA2604@quad
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Vince Weaver

This was discussed back in February (https://lkml.org/lkml/2014/2/18/956), but I never saw a patch come out of it.

On IvyBridge we share the SandyBridge cache event tables, but the dTLB-load-miss event is not compatible. Patch it up after the fact to the proper DTLB_LOAD_MISSES.DEMAND_LD_MISS_CAUSES_A_WALK event.

Signed-off-by: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1407141528200.17214@vincent-weaver-1.umelst.maine.edu
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Submitted by Paul Bolle

Compile tested. "polling" has been unused since commit f80c5b39 ("sched/idle, x86: Switch from TS_POLLING to TIF_POLLING_NRFLAG").

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Link: http://lkml.kernel.org/r/1404138749.2978.6.camel@x41
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 15 Jul 2014, 3 commits
-
-
Submitted by Boris Ostrovsky

init_espfix_ap() is currently off by one level when informing the hypervisor that allocated pages will be used for the ministacks' page tables.

The most immediate effect of this on a PV guest is that if 'stack_page = __get_free_page()' returns a non-zeroed-out page, the hypervisor will refuse to use it for a page table (which it shouldn't be anyway). This will result in warnings from both Xen and Linux.

More importantly, a subsequent write to that page (again, by a PV guest) is likely to result in a fatal page fault.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Link: http://lkml.kernel.org/r/1404926298-5565-1-git-send-email-boris.ostrovsky@oracle.com
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: <stable@vger.kernel.org>
-
Submitted by Borislav Petkov

Distribute the family-specific code to the corresponding functions. Also:

* move the direct-mapping splitting around the TSEG SMM area to bsp_init_amd();
* kill the ancient comment about what we should do for K5;
* merge amd_k7_smp_check() into its only caller, init_amd_k7(), and drop the cpu_has_mp macro.

Signed-off-by: Borislav Petkov <bp@suse.de>
Link: http://lkml.kernel.org/r/1403609105-8332-3-git-send-email-bp@alien8.de
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Submitted by Borislav Petkov

Dump the flags which denote that we have detected and/or applied bug workarounds to the CPU we're executing on, in a similar manner to the feature flags.

The advantage is that these flags do not accumulate over time like the CPU features do.

Signed-off-by: Borislav Petkov <bp@suse.de>
Link: http://lkml.kernel.org/r/1403609105-8332-2-git-send-email-bp@alien8.de
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 12 Jul 2014, 1 commit
-
-
Submitted by Borislav Petkov

Both the 32-bit and the 64-bit cmpxchg.h headers define __HAVE_ARCH_CMPXCHG, and there's ifdeffery which checks it. But since both bitnesses define it, we can just as well move it up into the main cmpxchg header and simplify a bit of code in doing so.

Signed-off-by: Borislav Petkov <bp@suse.de>
Link: http://lkml.kernel.org/r/20140711104338.GB17083@pd.tnic
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- 05 Jul 2014, 1 commit
-
-
Submitted by Rasmus Villemoes

Flipping the LSB doesn't require four lines of code. This shaves a few bytes off the generated code, including a branch.

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1403183731-15402-1-git-send-email-linux@rasmusvillemoes.dk
Signed-off-by: Ingo Molnar <mingo@kernel.org>
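An illustration of the simplification only; this is not the patched perf code itself:

    /* Sketch only: one XOR instead of a test-and-branch sequence. */
    static inline u64 flip_lsb(u64 val)
    {
        return val ^ 1;
    }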
-
- 02 Jul 2014, 1 commit
-
-
Submitted by HATAYAMA Daisuke

Currently, any NMI is falsely handled by the NMI handler of the NMI watchdog if the CondChgd bit in the MSR_CORE_PERF_GLOBAL_STATUS MSR is set.

For example, we use an external NMI to make the system panic and get a crash dump, but in that case the external NMI is falsely handled due to this issue.

This commit deals with the issue simply by ignoring the CondChgd bit.

Here is the explanation in detail.

On x86, the NMI watchdog uses the performance monitoring feature to periodically signal an NMI each time a performance counter overflows.

intel_pmu_handle_irq() is called as an NMI_LOCAL handler from the NMI handler of the NMI watchdog, perf_event_nmi_handler(). It identifies the owner of a given NMI by looking at the overflow status bits in the MSR_CORE_PERF_GLOBAL_STATUS MSR. If some of the bits are set, it handles the given NMI as its own NMI.

The problem is that intel_pmu_handle_irq() doesn't distinguish the CondChgd bit from the other bits. Unlike the other status bits, the CondChgd bit doesn't represent overflow status for performance counters. Thus, the CondChgd bit cannot be thought of as a mark indicating that a given NMI is the NMI watchdog's.

As a result, if the CondChgd bit is set, any NMI is falsely handled by the NMI handler of the NMI watchdog. Also, if the type of the falsely handled NMI is either NMI_UNKNOWN, NMI_SERR or NMI_IO_CHECK, the corresponding action is never performed until the CondChgd bit is cleared.

I noticed this behavior on systems with Ivy Bridge processors: Intel Xeon CPU E5-2630 v2 and Intel Xeon CPU E7-8890 v2. On both systems, the CondChgd bit in the MSR_CORE_PERF_GLOBAL_STATUS MSR had already been set at boot. The CondChgd bit is then immediately cleared by the next wrmsr to the MSR_CORE_PERF_GLOBAL_CTRL MSR and appears to remain 0.

On the other hand, on older processors such as Nehalem (Xeon E7540), the CondChgd bit is not set at boot.

I'm not sure about the exact behavior of the CondChgd bit, in particular when it is set. Although I read the Intel System Programmer's Manual to figure it out, the descriptions I found are:

  In 18.9.1:
  "The MSR_PERF_GLOBAL_STATUS MSR also provides a 'sticky bit' to
   indicate changes to the state of performance monitoring hardware"

  In Table 35-2 IA-32 Architectural MSRs:
  63 CondChg: status bits of this register have changed.

These are different from the behavior I see on the actual systems, as explained above.

At least, I think ignoring the CondChgd bit should be enough from the NMI watchdog's perspective.

Signed-off-by: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
Acked-by: Don Zickus <dzickus@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: <stable@vger.kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/20140625.103503.409316067.d.hatayama@jp.fujitsu.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
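A hedged sketch of the fix as described above; the macro name is illustrative. Per the SDM quote in the message, CondChgd is bit 63 of MSR_CORE_PERF_GLOBAL_STATUS and is not an overflow bit, so it must be masked off before the handler claims the NMI:

    /* Sketch only. */
    #define GLOBAL_STATUS_COND_CHG      (1ULL << 63)

    static inline u64 pmu_overflow_status(u64 global_status)
    {
        /*
         * If the remaining bits are 0, the NMI is not the PMU's and
         * the other NMI handlers (UNKNOWN/SERR/IO_CHECK) get their
         * chance to run.
         */
        return global_status & ~GLOBAL_STATUS_COND_CHG;
    }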
-