- 10 October 2014, 2 commits
-
-
Committed by Geert Uytterhoeven
The different architectures used their own (and different) declarations:

    extern __visible const void __nosave_begin, __nosave_end;
    extern const void __nosave_begin, __nosave_end;
    extern long __nosave_begin, __nosave_end;

Consolidate them using the first variant in <asm/sections.h>.

Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
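[Editor's note: a hedged illustration, not code from the patch, of how such linker-provided section markers are typically consumed once they live in <asm/sections.h>:]

    #include <asm/sections.h>

    /* size of the region hibernation must not save; taking the address
     * of the markers is the idiomatic way to use linker symbols */
    unsigned long nosave_size = (unsigned long)&__nosave_end -
                                (unsigned long)&__nosave_begin;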
-
Committed by Mel Gorman
ARCH_USES_NUMA_PROT_NONE was defined for architectures that implemented _PAGE_NUMA using _PROT_NONE. This saved using an additional PTE bit and relied on the fact that PROT_NONE vmas were skipped by the NUMA hinting fault scanner. This was found to be conceptually confusing with a lot of implicit assumptions, and it was asked that an alternative be found.

Commit c46a7c81 ("x86: define _PAGE_NUMA by reusing software bits on the PMD and PTE levels") redefined _PAGE_NUMA on x86 to be one of the swap PTE bits and shrunk the maximum possible swap size, but it did not go far enough. There are no architectures that reuse _PROT_NONE as _PROT_NUMA, but the relics still exist.

This patch removes ARCH_USES_NUMA_PROT_NONE and removes some unnecessary duplication in powerpc vs the generic implementation by defining the types the core NUMA helpers expect to exist from x86 with their ppc64 equivalents. This necessitated creating a PTE bit mask that identifies the bits distinguishing present from NUMA pte entries, but it is expected this will only differ between arches based on _PAGE_PROTNONE. The naming for the generic helpers was taken from x86 originally, but ppc64 has types that are equivalent for the purposes of the helpers, so they are mapped instead of duplicating code.

Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
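[Editor's note: a hedged sketch of the distinguishing-mask idea in x86 terms; the mask name and helper shape here are illustrative, not lifted from the patch:]

    /* bits that tell a present PTE apart from a NUMA-hinting PTE; per the
     * description above, arches should differ here only in _PAGE_PROTNONE */
    #define _PAGE_NUMA_DISTINGUISH (_PAGE_NUMA | _PAGE_PRESENT | _PAGE_PROTNONE)

    static inline int pte_numa(pte_t pte)
    {
            /* a NUMA-hinting entry has only the NUMA bit of the mask set */
            return (pte_flags(pte) & _PAGE_NUMA_DISTINGUISH) == _PAGE_NUMA;
    }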
-
- 09 October 2014, 1 commit
-
-
Committed by Al Viro
... rather than doing that in the guts of ->load_binary().

[updated to fix the bug spotted by Shentino - for SIGSEGV we really need something stronger than send_sig_info(); again, better to do that in one place]

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 08 October 2014, 3 commits
-
-
Committed by Jan Beulich
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Link: http://lkml.kernel.org/r/542291CA0200007800038085@mail.emea.novell.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Jan Beulich
- don't include unneeded headers
- drop redundant entry point label
- complete unwind annotations
- use .L prefix on local labels to not clutter the symbol table

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Link: http://lkml.kernel.org/r/5422917E0200007800038081@mail.emea.novell.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Jan Beulich
- don't include unneeded headers
- don't open-code PER_CPU_VAR()
- drop redundant entry point label
- complete unwind annotations
- use .L prefix on local label to not clutter the symbol table

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Link: http://lkml.kernel.org/r/542290BC020000780003807D@mail.emea.novell.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 06 October 2014, 1 commit
-
-
Committed by Mukesh Rathor
This fixes two bugs in PVH guests:

- Not setting EFER.NX means the NX bit in page table entries is ignored on Intel processors and causes reserved bit page faults on AMD processors.
- After the Xen commit 7645640d6ff1 ("x86/PVH: don't set EFER_SCE for pvh guest"), PVH guests are required to set EFER.SCE to enable the SYSCALL instruction.

Secondary VCPUs are started with pagetables that have the NX bit set, so EFER.NX must be set before using any stack or data segment. xen_pvh_cpu_early_init() is the new secondary VCPU entry point that sets EFER before jumping to cpu_bringup_and_idle().

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
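[Editor's note: a hedged C rendering of what the new entry point must do before anything else; the real code runs in early assembly, and the helper name is illustrative:]

    #include <asm/msr.h>    /* rdmsrl()/wrmsrl(), MSR_EFER, EFER_SCE, EFER_NX */

    static void pvh_secondary_set_efer(void)
    {
            u64 efer;

            rdmsrl(MSR_EFER, efer);
            efer |= EFER_SCE | EFER_NX;   /* SYSCALL enable + honor NX PTEs */
            wrmsrl(MSR_EFER, efer);
            /* only now is it safe to touch NX-marked stack/data */
    }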
-
- 03 October 2014, 6 commits
-
-
Committed by Juergen Gross
Mapping the initrd into the initial mapping imposed size restrictions that native kernels wouldn't have. The kernel doesn't really need the initrd to be mapped, so use the infrastructure available in Xen to avoid the mapping and hence the restriction.

Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
Committed by Pranith Kumar
Use the much more reader-friendly ACCESS_ONCE() instead of the cast to volatile. This is purely a stylistic change.

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
Acked-by: Jesper Nilsson <jesper.nilsson@axis.com>
Acked-by: Hans-Christian Egtvedt <egtvedt@samfundet.no>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-arch@vger.kernel.org
Link: http://lkml.kernel.org/r/1411482607-20948-1-git-send-email-bobby.prani@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
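[Editor's note: for context, ACCESS_ONCE() in kernels of this era is defined in <linux/compiler.h> roughly as below; the before/after pair is an illustrative sketch with a hypothetical 'flag' variable, not code from the patch:]

    #define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

    /* before: open-coded volatile cast */
    while (*(volatile int *)&flag == 0)
            cpu_relax();

    /* after: identical semantics, far easier to read */
    while (ACCESS_ONCE(flag) == 0)
            cpu_relax();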
-
Committed by Wei Huang
PMU checking can fail for various reasons. On a native machine, this is mostly caused by faulty hardware and it is reasonable to use KERN_ERR in reporting. However, when the kernel is running in a virtualized environment, this checking can fail if the virtual PMU is not supported (e.g. KVM on an AMD host). It is annoying to see an error message on the splash screen, even though we know such a failure is benign in a virtualized environment.

This patch checks if the kernel is running in a virtualized environment. If so, it will use KERN_INFO in reporting, which reduces the messages' syslog priority. This patch was tested successfully on KVM.

Signed-off-by: Wei Huang <wei@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Link: http://lkml.kernel.org/r/1411617314-24659-1-git-send-email-wei@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
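[Editor's note: a hedged sketch of the reporting change — message text and exact call shape are illustrative. X86_FEATURE_HYPERVISOR is set when CPUID advertises a hypervisor, so it can steer the log level:]

    if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
            /* benign: a virtual PMU may simply be absent under KVM etc. */
            pr_info("PMU check failed, disabling hardware events\n");
    else
            /* on bare metal this likely indicates faulty hardware */
            pr_err("PMU check failed, disabling hardware events\n");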
-
Committed by Andi Kleen
I was looking for the trinity oops cause in the uncore driver (so far I haven't found it). However, I found this tiny race: when a box is set up, two threads on the same CPU may be setting up the box in parallel (e.g. with kernel preemption). This could lead to the reference count being incremented too much. Always recheck that there is no existing cpu reference inside the lock.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: eranian@google.com
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Link: http://lkml.kernel.org/r/1411424826-15629-1-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
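[Editor's note: a hedged, simplified sketch of the pattern described — field and lock names are hypothetical. The point is re-checking for an existing reference after taking the lock, so a preempting thread on the same CPU cannot bump the refcount twice:]

    raw_spin_lock(&uncore_box_lock);
    if (!cpumask_test_cpu(cpu, &box->active_mask)) {  /* recheck under lock */
            cpumask_set_cpu(cpu, &box->active_mask);
            atomic_inc(&box->refcnt);
    }
    raw_spin_unlock(&uncore_box_lock);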
-
Committed by Dave Hansen
Commit cebf15eb ("x86, sched: Add new topology for multi-NUMA-node CPUs") added some code to try to detect the situation where we have a NUMA node inside of the "DIE" sched domain. It detected this by looking for cpus which match_die() but do not match NUMA nodes via topology_same_node(). I wrote it up as:

    if (match_die(c, o) == !topology_same_node(c, o))

which actually seemed to work some of the time, albeit accidentally. It should have been doing an &&, not an ==. This code essentially chopped off the "DIE" domain on one of Andrew Morton's systems. He reported that this patch fixed his issue.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reported-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Dave Hansen <dave@sr71.net>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Jan Kiszka <jan.kiszka@siemens.com>
Cc: Lan Tianyu <tianyu.lan@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Toshi Kani <toshi.kani@hp.com>
Link: http://lkml.kernel.org/r/20140930214546.FD481CFF@viggo.jf.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
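[Editor's note: the intended check per the fix described above, wrapped in a sketch — the wrapper function name is hypothetical:]

    static bool numa_inside_die(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
    {
            /* buggy: if (match_die(c, o) == !topology_same_node(c, o)) */
            return match_die(c, o) && !topology_same_node(c, o);
    }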
-
Committed by Paolo Bonzini
This fixes the following OOPS:

    loaded kvm module (v3.17-rc1-168-gcec26bc3)
    BUG: unable to handle kernel paging request at fffffffffffffffe
    IP: [<ffffffff81168449>] put_page+0x9/0x30
    PGD 1e15067 PUD 1e17067 PMD 0
    Oops: 0000 [#1] PREEMPT SMP
    [<ffffffffa063271d>] ? kvm_vcpu_reload_apic_access_page+0x5d/0x70 [kvm]
    [<ffffffffa013b6db>] vmx_vcpu_reset+0x21b/0x470 [kvm_intel]
    [<ffffffffa0658816>] ? kvm_pmu_reset+0x76/0xb0 [kvm]
    [<ffffffffa064032a>] kvm_vcpu_reset+0x15a/0x1b0 [kvm]
    [<ffffffffa06403ac>] kvm_arch_vcpu_setup+0x2c/0x50 [kvm]
    [<ffffffffa062e540>] kvm_vm_ioctl+0x200/0x780 [kvm]
    [<ffffffff81212170>] do_vfs_ioctl+0x2d0/0x4b0
    [<ffffffff8108bd99>] ? __mmdrop+0x69/0xb0
    [<ffffffff812123d1>] SyS_ioctl+0x81/0xa0
    [<ffffffff8112a6f6>] ? __audit_syscall_exit+0x1f6/0x2a0
    [<ffffffff817229e9>] system_call_fastpath+0x16/0x1b
    Code: c6 78 ce a3 81 4c 89 e7 e8 d9 80 ff ff 0f 0b 4c 89 e7 e8 8f f6 ff ff e9 fa fe ff ff 66 2e 0f 1f 84 00 00 00 00 00 66 66 66 66 90 <48> f7 07 00 c0 00 00 55 48 89 e5 75 1e 8b 47 1c 85 c0 74 27 f0
    RIP [<ffffffff81193045>] put_page+0x5/0x50

when not using the in-kernel irqchip ("-machine kernel_irqchip=off" with QEMU). The fix is to make the same check in kvm_vcpu_reload_apic_access_page that we already have in vmx.c's vm_need_virtualize_apic_accesses().

Reported-by: Jan Kiszka <jan.kiszka@siemens.com>
Tested-by: Jan Kiszka <jan.kiszka@siemens.com>
Fixes: 4256f43f
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 02 October 2014, 3 commits
-
-
Committed by Mathias Krause
This reverts commit 7da4b29d. Now that the issue is fixed, we can re-enable the code.

Signed-off-by: Mathias Krause <minipli@googlemail.com>
Cc: Chandramouli Narayanan <mouli@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Mathias Krause
The defines for xkey3, xkey6 and xkey9 are not used in the code. They're probably leftovers from merging the three source files for 128-, 192- and 256-bit AES. They can safely be removed.

Signed-off-by: Mathias Krause <minipli@googlemail.com>
Cc: Chandramouli Narayanan <mouli@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Mathias Krause
The "by8" CTR AVX implementation fails to propperly handle counter overflows. That was the reason it got disabled in commit 7da4b29d ("crypto: aesni - disable "by8" AVX CTR optimization"). Fix the overflow handling by incrementing the counter block as a double quad word, i.e. a 128 bit, and testing for overflows afterwards. We need to use VPTEST to do so as VPADD* does not set the flags itself and silently drops the carry bit. As this change adds branches to the hot path, minor performance regressions might be a side effect. But, OTOH, we now have a conforming implementation -- the preferable goal. A tcrypt test on a SandyBridge system (i7-2620M) showed almost identical numbers for the old and this version with differences within the noise range. A dm-crypt test with the fixed version gave even slightly better results for this version. So the performance impact might not be as big as expected. Tested-by: NRomain Francoise <romain@orebokech.com> Signed-off-by: NMathias Krause <minipli@googlemail.com> Cc: Chandramouli Narayanan <mouli@linux.intel.com> Signed-off-by: NHerbert Xu <herbert@gondor.apana.org.au>
-
- 27 September 2014, 1 commit
-
-
Committed by Alexei Starovoitov
Done as a separate commit to ease conflict resolution.

Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 25 September 2014, 1 commit
-
-
Committed by Matt Fleming
If we're executing the 32-bit efi_char16_printk() code path (i.e. running on top of 32-bit firmware) we know that efi_early->text_output will be a 32-bit value, even though ->text_output has type u64. Unfortunately, we currently pass ->text_output directly to efi_early->call(), so for CONFIG_X86_32 the compiler will push a 64-bit value onto the stack, causing the other parameters to be misaligned.

The way we handle this in the rest of the EFI boot stub is to pass pointers as arguments to efi_early->call(), which automatically do the right thing (pointers are 32-bit on CONFIG_X86_32, and we simply ignore the upper 32 bits of the argument register if running in 64-bit mode with 32-bit firmware).

This fixes a corruption bug when printing strings from the 32-bit EFI boot stub.

Link: https://bugzilla.kernel.org/show_bug.cgi?id=84241
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
-
- 24 September 2014, 22 commits
-
-
Committed by Oleg Nesterov
Trivial. We have "lib-y += thunk_$(BITS).o" at the start, no need to add thunk_64.o if !CONFIG_X86_32.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Andy Lutomirski <luto@amacapital.net>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20140921184232.GB23727@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Oleg Nesterov
___preempt_schedule() does SAVE_ALL/RESTORE_ALL, but this is suboptimal; we do not need to save/restore the callee-saved registers. And we already have arch/x86/lib/thunk_*.S which implements similar asm wrappers, so it makes sense to redefine ___preempt_schedule() as "THUNK ..." and remove preempt.S altogether.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: Andy Lutomirski <luto@amacapital.net>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20140921184153.GA23727@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Mathias Krause
The "by8" implementation introduced in commit 22cddcc7 ("crypto: aes - AES CTR x86_64 "by8" AVX optimization") is failing crypto tests as it handles counter block overflows differently. It only accounts the right most 32 bit as a counter -- not the whole block as all other implementations do. This makes it fail the cryptomgr test #4 that specifically tests this corner case. As we're quite late in the release cycle, just disable the "by8" variant for now. Reported-by: NRomain Francoise <romain@orebokech.com> Signed-off-by: NMathias Krause <minipli@googlemail.com> Cc: Chandramouli Narayanan <mouli@linux.intel.com> Signed-off-by: NHerbert Xu <herbert@gondor.apana.org.au>
-
Committed by Wanpeng Li
The following bug can be triggered by hot adding and removing a large number of Xen domain0's vcpus repeatedly:

    BUG: unable to handle kernel NULL pointer dereference at 0000000000000004
    IP: [..] find_busiest_group
    PGD 5a9d5067 PUD 13067 PMD 0
    Oops: 0000 [#3] SMP
    [...]
    Call Trace:
     load_balance
     ? _raw_spin_unlock_irqrestore
     idle_balance
     __schedule
     schedule
     schedule_timeout
     ? lock_timer_base
     schedule_timeout_uninterruptible
     msleep
     lock_device_hotplug_sysfs
     online_store
     dev_attr_store
     sysfs_write_file
     vfs_write
     SyS_write
     system_call_fastpath

The last-level-cache shared mask is built during CPU up, and the build_sched_domain() routine takes advantage of it to set up the sched domain CPU topology. However, llc_shared_mask is not released during CPU disable, which leads to an invalid sched domain CPU topology. This patch fixes it by releasing the llc_shared_mask correctly during CPU disable.

Yasuaki also reported that this can happen on real hardware: https://lkml.org/lkml/2014/7/22/1018

His case is here:

==
Here is an example on my system. My system has 4 sockets and each socket has 15 cores and HT is enabled. In this case, each core of sockets is numbered as follows:

             | CPU#
    Socket#0 | 0-14, 60-74
    Socket#1 | 15-29, 75-89
    Socket#2 | 30-44, 90-104
    Socket#3 | 45-59, 105-119

Then llc_shared_mask of CPU#30 has 0x3fff80000001fffc0000000. It means that the last level cache of Socket#2 is shared with CPU#30-44 and 90-104.

When hot-removing socket#2 and #3, each core of sockets is numbered as follows:

             | CPU#
    Socket#0 | 0-14, 60-74
    Socket#1 | 15-29, 75-89

But llc_shared_mask is not cleared. So llc_shared_mask of CPU#30 remains having 0x3fff80000001fffc0000000.

After that, when hot-adding socket#2 and #3, each core of sockets is numbered as follows:

             | CPU#
    Socket#0 | 0-14, 60-74
    Socket#1 | 15-29, 75-89
    Socket#2 | 30-59
    Socket#3 | 90-119

Then llc_shared_mask of CPU#30 becomes 0x3fff8000fffffffc0000000. It means that the last level cache of Socket#2 is shared with CPU#30-59 and 90-104. So the mask has the wrong value.

Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Tested-by: Linn Crosetto <linn@hp.com>
Reviewed-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Toshi Kani <toshi.kani@hp.com>
Reviewed-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: <stable@vger.kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Steven Rostedt <srostedt@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1411547885-48165-1-git-send-email-wanpeng.li@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
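[Editor's note: a hedged sketch of the fix's core idea — drop the outgoing CPU from its siblings' LLC masks and clear its own, mirroring how the other sibling maps are torn down; exact placement in smpboot.c may differ:]

    int sibling;

    for_each_cpu(sibling, cpu_llc_shared_mask(cpu))
            cpumask_clear_cpu(cpu, cpu_llc_shared_mask(sibling));
    cpumask_clear(cpu_llc_shared_mask(cpu));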
-
Committed by Stephane Eranian
This patch restructures the memory controller (IMC) uncore PMU support for client SNB/IVB/HSW processors. The main change is that it can now cope with more than one PCI device ID per processor model. There are many flavors of memory controllers for each processor. They have different PCI device IDs, yet they behave the same w.r.t. the memory controller PMU that we are interested in. The patch now supports two distinct memory controllers for IVB processors: one for mobile, one for desktop.

Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20140917090616.GA11281@quad
Cc: ak@linux.intel.com
Cc: kan.liang@intel.com
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Andi Kleen
The PCU frequency band filters use 8 bits each in a register. When setting up the value, the shift value was not correctly scaled, which resulted in all filters except for band 0 being zero. Fix the scaling. This allows correctly monitoring multiple uncore frequency bands.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1409872109-31645-5-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Andi Kleen
The IvyBridge-EP uncore driver was missing three filter flags: NC, ISOC, C6, which are useful in some cases. Support them in the same way as the Haswell-EP driver, by allowing them to be set and exposing them in the sysfs formats. Also fix a typo in a define. Relies on the Haswell-EP driver being applied earlier.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1409872109-31645-4-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Yan, Zheng
The current code registers PMUs for all possible uncore pci devices. This is not good because, on some machines, one or more uncore pci devices can be missing. A missing pci device makes the corresponding PMU unusable. Register the PMU only if the uncore device exists.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1409872109-31645-3-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Yan, Zheng
The uncore subsystem in Haswell-EP is similar to Sandy/Ivy Bridge-EP. There are some differences in config register encoding and pci device IDs. The Haswell-EP uncore also supports a few new events. Add the Haswell-EP driver to the snbep split driver.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
[ Add missing break. Add imc events. Add cbox nc/isoc/c6. ]
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1409872109-31645-2-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Andi Kleen
Use the newly added Broadwell cache event list for Haswell too. All Haswell and Broadwell events and offcore masks used in these lists are identical. However, Haswell is very different from the Sandy Bridge list that was used previously. This fixes a wide range of miscounted cache events.

The node events are now only for retired memory events, so prefetching and speculative memory accesses are not included. They are PEBS capable now, which makes it much easier to sample for them, plus it's possible to create address maps with -d.

The prefetch events are gone now. The way the hardware counts them is very misleading (some prefetches included, others not), so it seemed best to leave them out.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1409683455-29168-5-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Andi Kleen
On Broadwell INST_RETIRED.ALL cannot be used with any period that doesn't have the lowest 6 bits cleared. And the period should not be smaller than 128. Add a new callback to enforce this, and set it for Broadwell. This is erratum BDM57 and BDM11.

How does this handle the case when an app requests a specific period with some of the bottom bits set? The app thinks it is sampling at X occurrences per sample, when it is in fact at X - 63 (worst case).

Short answer: Any useful instruction sampling period needs to be 4-6 orders of magnitude larger than 128, as a PMI every 128 instructions would instantly overwhelm the system and be throttled. So the +-64 error from this is really small compared to the period, much smaller than normal system jitter.

Long answer (write-up by Peter): IFF we guarantee perf_event_attr::sample_period >= 128. Suppose we start out with sample_period=192; then we'll set period_left to 192, we'll end up with left = 128 (we truncate the lower bits). We get an interrupt, find that period_left = 64 (>0 so we return 0 and don't get an overflow handler), up that to 128. Then we trigger again, at n=256. Then we find period_left = -64 (<=0 so we return 1 and do get an overflow). We increment with sample_period so we get left = 128. We fire again, at n=384, period_left = 0 (<=0 so we return 1 and get an overflow). And on and on.

So while the individual interrupts are 'wrong' we get them with interval=256,128 in exactly the right ratio to average out at 192. And this works for everything >=128. So the num_samples*fixed_period thing is still entirely correct +- 127, which is good enough I'd say, as you already have that error anyhow.

So no need to 'fix' the tools; all we need to do is refuse to create INST_RETIRED:ALL events with sample_period < 128.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Maria Dimakopoulou <maria.n.dimakopoulou@gmail.com>
Cc: Mark Davies <junk@eslaf.co.uk>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1409683455-29168-4-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
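[Editor's note: a hedged sketch of what the new limit-period callback enforces for INST_RETIRED.ALL on Broadwell; the helper name is illustrative:]

    static u64 bdw_limit_period_sketch(u64 period)
    {
            period &= ~0x3fULL;   /* BDM57/BDM11: low 6 bits must be clear */
            if (period < 128)     /* and the period must be at least 128 */
                    period = 128;
            return period;
    }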
-
Committed by Andi Kleen
Add Broadwell client support to perf. This is very similar to Haswell. It uses a new cache event table, because there were various changes there. The constraint list has one new event that needs to be handled over Haswell. The PEBS event list is the same, so we reuse Haswell's.

[fengguang.wu: make intel_bdw_event_constraints[] static]

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1409683455-29168-3-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Andi Kleen
Add names for each Haswell model as requested by Peter.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1409683455-29168-2-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Andi Kleen
Model 71 is a Broadwell, not a Haswell. The model number was added by mistake earlier. Remove it for now, until it can be re-added later with real Broadwell support. In practice it does not cause a lot of issues because the Broadwell PMU is very similar to Haswell's, but some details were wrong, and it's better to handle it correctly.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: eranian@google.com
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Link: http://lkml.kernel.org/r/1409683455-29168-1-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Dave Hansen
I'm getting the spew below when booting with Haswell (Xeon E5-2699 v3) CPUs and the "Cluster-on-Die" (CoD) feature enabled in the BIOS. It seems similar to the issue that some folks from AMD ran into on their systems and addressed in this commit: 161270fc ("x86/smp: Fix topology checks on AMD MCM CPUs").

Both these Intel and AMD systems break an assumption which is being enforced by topology_sane(): a socket may not contain more than one NUMA node. AMD special-cased their system by looking for a cpuid flag. The Intel mode is dependent on BIOS options, and I do not know of a way in which it is enumerated other than the tables being parsed during the CPU bringup process. In other words, we have to trust the ACPI tables <shudder>.

This detects the situation where a NUMA node occurs at a place in the middle of the "CPU" sched domains. It replaces the default topology with one that relies on the NUMA information from the firmware (SRAT table) for all levels of sched domains above the hyperthreads.

This also fixes a sysfs bug. We used to freak out when we saw the "mc" group cross a node boundary, so we stopped building the MC group. MC gets exported as the 'core_siblings_list' in /sys/devices/system/cpu/cpu*/topology/ and this caused CPUs with the same 'physical_package_id' to not be listed together in 'core_siblings_list'. This violates a statement from Documentation/ABI/testing/sysfs-devices-system-cpu:

    core_siblings: internal kernel map of cpu#'s hardware threads
    within the same physical_package_id.

    core_siblings_list: human-readable list of the logical CPU numbers
    within the same physical_package_id as cpu#.

The sysfs effects here cause an issue with the hwloc tool where it gets confused and thinks there are more sockets than are physically present.

Before this patch, there are two packages:

    # cd /sys/devices/system/cpu/
    # cat cpu*/topology/physical_package_id | sort | uniq -c
         18 0
         18 1

But 4 _sets_ of core siblings:

    # cat cpu*/topology/core_siblings_list | sort | uniq -c
          9 0-8
          9 18-26
          9 27-35
          9 9-17

After this patch, there are only 2 sets of core siblings, which is what we expect for a 2-socket system:

    # cat cpu*/topology/physical_package_id | sort | uniq -c
         18 0
         18 1
    # cat cpu*/topology/core_siblings_list | sort | uniq -c
         18 0-17
         18 18-35

Example spew:

    ...
    NMI watchdog: enabled on all CPUs, permanently consumes one hw-PMU counter.
    #2 #3 #4 #5 #6 #7 #8
    .... node #1, CPUs: #9
    ------------[ cut here ]------------
    WARNING: CPU: 9 PID: 0 at /home/ak/hle/linux-hle-2.6/arch/x86/kernel/smpboot.c:306 topology_sane.isra.2+0x74/0x90()
    sched: CPU #9's mc-sibling CPU #0 is not on the same node! [node: 1 != 0]. Ignoring dependency.
    Modules linked in:
    CPU: 9 PID: 0 Comm: swapper/9 Not tainted 3.17.0-rc1-00293-g8e01c4d-dirty #631
    Hardware name: Intel Corporation S2600WTT/S2600WTT, BIOS GRNDSDP1.86B.0036.R05.1407140519 07/14/2014
    0000000000000009 ffff88046ddabe00 ffffffff8172e485 ffff88046ddabe48
    ffff88046ddabe38 ffffffff8109691d 000000000000b001 0000000000000009
    ffff88086fc12580 000000000000b020 0000000000000009 ffff88046ddabe98
    Call Trace:
    [<ffffffff8172e485>] dump_stack+0x45/0x56
    [<ffffffff8109691d>] warn_slowpath_common+0x7d/0xa0
    [<ffffffff8109698c>] warn_slowpath_fmt+0x4c/0x50
    [<ffffffff81074f94>] topology_sane.isra.2+0x74/0x90
    [<ffffffff8107530e>] set_cpu_sibling_map+0x31e/0x4f0
    [<ffffffff8107568d>] start_secondary+0x1ad/0x240
    ---[ end trace 3fe5f587a9fcde61 ]---
    #10 #11 #12 #13 #14 #15 #16 #17
    .... node #2, CPUs: #18 #19 #20 #21 #22 #23 #24 #25 #26
    .... node #3, CPUs: #27 #28 #29 #30 #31 #32 #33 #34 #35

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
[ Added LLC domain and s/match_mc/match_die/ ]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: David Rientjes <rientjes@google.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Toshi Kani <toshi.kani@hp.com>
Cc: brice.goglin@gmail.com
Cc: "H. Peter Anvin" <hpa@linux.intel.com>
Link: http://lkml.kernel.org/r/20140918193334.C065EBCE@viggo.jf.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Mathias Krause
The pci_find_bios() function is only ever called from initialization code and can therefore be marked as such, too. This, in turn, allows marking other functions called only in this context as well. The bios32_indirect variable can be marked __initdata as it is only referenced from __init functions now.

Signed-off-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Mathias Krause
The pci_mmcfg_probes[] array is only ever read, therefore make it const.

Signed-off-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Mathias Krause
The constants in pci_mmcfg_nvidia_mcp55() need to be marked __initconst or they will remain in memory after init memory is released.

Signed-off-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Mathias Krause
According to include/linux/init.h, the __init annotation should be added immediately before the function name. However, for quite a few functions in mmconfig-shared.c this is not the case: it's either before the return type or even in the middle of it. Although gcc still gets it right, we should change them to comply with the rules of include/linux/init.h.

Signed-off-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
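[Editor's note: an illustrative pair showing the placement rule; the function names are hypothetical:]

    static void __init good_setup(void) { }  /* annotation right before the name */
    static __init void odd_setup(void) { }   /* gcc accepts it, but non-conforming */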
-
Committed by Tang Chen
In order to make the APIC access page migratable, stop pinning it in memory. And because the APIC access page is not pinned in memory, we can remove kvm_arch->apic_access_page. When we need to write its physical address into the vmcs, we use gfn_to_page() to get its page struct, which is needed to call page_to_phys(); the page is then immediately unpinned.

Suggested-by: Gleb Natapov <gleb@kernel.org>
Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
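[Editor's note: a hedged sketch of the described pattern using VMX-era names; the surrounding code is simplified. Resolve the page, program the VMCS, and drop the reference immediately instead of keeping a long-term pin:]

    struct page *page;

    page = gfn_to_page(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
    vmcs_write64(APIC_ACCESS_ADDR, page_to_phys(page));
    put_page(page);   /* no pin held; the MMU notifier handles invalidation */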
-
Committed by Tang Chen
Currently, the APIC access page is pinned by KVM for the entire life of the guest. We want to make it migratable in order to make memory hot-unplug available for machines that run KVM.

This patch prepares to handle this for the case where there is no nested virtualization, or where the nested guest does not have an APIC page of its own. All accesses to kvm->arch.apic_access_page are changed to go through kvm_vcpu_reload_apic_access_page.

If the APIC access page is invalidated when the host is running, we update the VMCS in the next guest entry. If it is invalidated when the guest is running, the MMU notifier will force an exit, after which we will handle everything as in the previous case. If it is invalidated when a nested guest is running, the request will update either the VMCS01 or the VMCS02. Updating the VMCS01 is done at the next L2->L1 exit, while updating the VMCS02 is done in prepare_vmcs02.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Tang Chen
Currently, the APIC access page is pinned by KVM for the entire life of the guest. We want to make it migratable in order to make memory hot-unplug available for machines that run KVM.

This patch prepares to handle this in generic code, through a new request bit (that will be set by the MMU notifier) and a new hook that is called whenever the request bit is processed.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-