- 08 Oct 2016, 16 commits
-
-
Submitted by Dave Hansen

Our XSAVE features are divided into two categories: those that generate FPU exceptions, and those that do not. MPX and pkeys do not generate FPU exceptions and thus cannot be used lazily. We disable them when lazy mode is forced on.

We have a pair of masks to collect these two sets of features, but XFEATURE_MASK_PKRU was added to the wrong mask: XFEATURE_MASK_LAZY. Fix it by moving the feature to XFEATURE_MASK_EAGER.

Note: this only causes a problem if you boot with lazy FPU mode (eagerfpu=off), which is *not* the default. It also only affects hardware which is not currently publicly available.

It looks like eager mode is going away, but we still need this patch applied to any kernel that has protection keys and lazy mode, which is 4.6 through 4.8 at this point, and 4.9 if the lazy removal isn't sent to Linus for 4.9.

Fixes: c8df4009 ("x86/fpu, x86/mm/pkeys: Add PKRU xsave fields and data structures")
Signed-off-by: Dave Hansen <dave.hansen@intel.com>
Cc: Dave Hansen <dave@sr71.net>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/20161007162342.28A49813@viggo.jf.intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Submitted by Thomas Gleixner

Markus reported that he sees new warnings:

  APIC: NR_CPUS/possible_cpus limit of 4 reached. Processor 4/0x84 ignored.
  APIC: NR_CPUS/possible_cpus limit of 4 reached. Processor 5/0x85 ignored.

This comes from the recent persistent cpuid - nodeid changes. The code which emits the warning has been called prior to these changes only for enabled processors. Now it's called for disabled processors as well, to get the possible cpu accounting correct. So if the kernel is compiled for the number of actually available/enabled CPUs and the BIOS reports disabled CPUs as well, then the above warnings are printed.

That's a pointless exercise, as it only makes sense if there are more CPUs enabled than the kernel supports. Make the warning conditional on enabled processors so we are back to the state before these changes.

Fixes: 8f54969d ("x86/acpi: Introduce persistent storage for cpuid <-> apicid mapping")
Reported-and-tested-by: Markus Trippelsdorf <markus@trippelsdorf.de>
Cc: One Thousand Gnomes <gnomes@lxorguk.ukuu.org.uk>
Cc: Dou Liyang <douly.fnst@cn.fujitsu.com>
Cc: linux-acpi@vger.kernel.org
Cc: Gu Zheng <guz.fnst@cn.fujitsu.com>
Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1610071549330.19804@nanos
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Submitted by Thomas Gleixner

Yinghai reported that the recent changes to make the cpuid - nodeid relationship permanent cause a cpuid ordering regression on a system which has x2apic enabled.

The reason is that the ACPI local APIC parser has no sanity check for apicid 0xff, which is an invalid id. So a CPU id is allocated for this invalid local APIC id and therefore breaks the cpuid ordering.

Add a sanity check to acpi_parse_lapic() which ignores the invalid id.

Fixes: 8f54969d ("x86/acpi: Introduce persistent storage for cpuid <-> apicid mapping")
Reported-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Gu Zheng <guz.fnst@cn.fujitsu.com>
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: douly.fnst@cn.fujitsu.com
Cc: zhugh.fnst@cn.fujitsu.com
Cc: Tony Luck <tony.luck@intel.com>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Len Brown <lenb@kernel.org>
Cc: Lv Zheng <lv.zheng@intel.com>
Cc: robert.moore@intel.com
Cc: linux-acpi@vger.kernel.org
Link: https://lkml.kernel.org/r/CAE9FiQVQx6FRXT-RdR7Crz4dg5LeUWHcUSy1KacjR+JgU_vGJg@mail.gmail.com
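The fix amounts to a short guard early in the MADT parser. A hedged sketch reconstructed from the description above (the surrounding parser code is abbreviated and assumed):

  static int __init
  acpi_parse_lapic(struct acpi_subtable_header *header, const unsigned long end)
  {
          struct acpi_madt_local_apic *processor =
                  (struct acpi_madt_local_apic *)header;

          if (BAD_MADT_ENTRY(processor, end))
                  return -EINVAL;

          /*
           * Ignore entries with the invalid local APIC id 0xff so that
           * no CPU id is allocated for them and cpuid ordering is kept.
           */
          if (processor->id == 0xff)
                  return 0;

          /* ... the rest of the parser registers the processor ... */
          return 0;
  }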
-
Submitted by Alexey Dobriyan

The current supplementary groups code can massively overallocate memory, and it is implemented such that access to an individual gid goes through a 2D array. If the number of gids is <= 32, the memory allocation is more or less tolerable (140/148 bytes). But if it is not, the code allocates a full page (!) regardless and, what's even more fun, doesn't reuse the small 32-entry array.

The 2D array means dependent shifts, loads and LEAs without any possibility to optimize them (the gid is never known at compile time).

All of the above is unnecessary. Switch to the usual trailing zero-length array scheme. Memory is allocated with kmalloc()/vmalloc() and only as much as needed. Accesses become simpler (LEA 8(gi,idx,4) or even without displacement).

The maximum number of gids is 65536, which translates to 256KB+8 bytes. I think the kernel can handle such an allocation.

On my usual desktop system with a whole 9 (nine) aux groups, struct group_info shrinks from 148 bytes to 44 bytes, yay!

Nice side effects:
 - "gi->gid[i]" is shorter than "GROUP_AT(gi, i)", less typing
 - fixes a little mess in net/ipv4/ping.c, which should have been using the GROUP_AT macro, but that point becomes moot
 - aux group allocation is persistent and should be accounted as such

Link: http://lkml.kernel.org/r/20160817201927.GA2096@p183.telecom.by
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Vasily Kulikov <segoon@openwall.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
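A minimal sketch of the trailing-array layout and exact-size allocation described above (simplified; the real code falls back to an accounted __vmalloc() for large sets, and struct group_info already lives in linux/cred.h):

  #include <linux/slab.h>
  #include <linux/vmalloc.h>
  #include <linux/uidgid.h>

  struct group_info {
          atomic_t usage;
          int ngroups;
          kgid_t gid[];                   /* flexible trailing array */
  };

  struct group_info *groups_alloc(int gidsetsize)
  {
          size_t len = sizeof(struct group_info) + sizeof(kgid_t) * gidsetsize;
          struct group_info *gi;

          /* allocate exactly what is needed; accounted, since it persists */
          gi = kmalloc(len, GFP_KERNEL_ACCOUNT | __GFP_NOWARN | __GFP_NORETRY);
          if (!gi)
                  gi = vmalloc(len);      /* 65536 gids => 256KB + header */
          if (!gi)
                  return NULL;

          atomic_set(&gi->usage, 1);
          gi->ngroups = gidsetsize;
          return gi;                      /* access is now simply gi->gid[i] */
  }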
-
Submitted by Marcin Nowakowski

The declarations of arch-specific functions were moved to a common header in commit 3820b4d2 ('uprobes: Move function declarations out of arch'), but MIPS and S390 have added them to their own trees later. Remove the unnecessary duplicates.

Link: http://lkml.kernel.org/r/1472804384-17830-1-git-send-email-marcin.nowakowski@imgtec.com
Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Chris Metcalf

When doing an nmi backtrace of many cores, most of which are idle, the output is a little overwhelming and very uninformative. Suppress messages for cpus that are idling when they are interrupted and just emit one line, "NMI backtrace for N skipped: idling at pc 0xNNN".

We do this by grouping all the cpuidle code together into a new .cpuidle.text section, and then checking the address of the interrupted PC to see if it lies within that section.

This commit suitably tags x86 and tile idle routines, and only adds in the minimal framework for other architectures.

Link: http://lkml.kernel.org/r/1472487169-14923-5-git-send-email-cmetcalf@mellanox.com
Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Daniel Thompson <daniel.thompson@linaro.org> [arm]
Tested-by: Petr Mladek <pmladek@suse.com>
Cc: Aaron Tomlin <atomlin@redhat.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
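The mechanism boils down to a section attribute plus an address-range test. A hedged sketch (the section-bound symbol names follow the usual linker-script convention and are assumed, not quoted from the patch):

  /* tag for functions collected into .cpuidle.text */
  #define __cpuidle __attribute__((__section__(".cpuidle.text")))

  /* section bounds provided by the architecture's vmlinux.lds */
  extern char __cpuidle_text_start[], __cpuidle_text_end[];

  bool cpu_in_idle(unsigned long pc)
  {
          return pc >= (unsigned long)__cpuidle_text_start &&
                 pc <  (unsigned long)__cpuidle_text_end;
  }

The NMI backtrace handler can then print the one-line "skipped: idling at pc ..." message whenever this test is true for the interrupted instruction pointer.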
-
Submitted by Chris Metcalf

Previously tile was rolling its own method of capturing backtrace data in the NMI handlers, but it was relying on running printk() from the NMI handler, which is not always safe. So adopt the nmi_backtrace model (with the new cpumask extension) instead.

So that we can call the nmi_backtrace code directly from the nmi handler, move the nmi_enter()/exit() into the top-level tile NMI handler.

The semantics of the routine change slightly since it is now synchronous with the remote cores completing the backtraces. Previously it was asynchronous, but with protection to avoid starting a new remote backtrace if the old one was still in progress.

Link: http://lkml.kernel.org/r/1472487169-14923-4-git-send-email-cmetcalf@mellanox.com
Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com>
Cc: Daniel Thompson <daniel.thompson@linaro.org> [arm]
Cc: Petr Mladek <pmladek@suse.com>
Cc: Aaron Tomlin <atomlin@redhat.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Chris Metcalf

Currently on arm there is code that checks whether it should call dump_stack() explicitly, to avoid trying to raise an NMI when the current context is not preemptible by the backtrace IPI. Similarly, the forthcoming arch/tile support uses an IPI mechanism that does not support generating an NMI to self.

Accordingly, move the code that guards this case into the generic mechanism, and invoke it unconditionally whenever we want a backtrace of the current cpu.

It seems plausible that in all cases, dump_stack() will generate better information than generating a stack from the NMI handler. The register state will be missing, but that state is likely not particularly helpful in any case. Or, if we think it is helpful, we should be capturing and emitting the current register state in all cases when regs == NULL is passed to nmi_cpu_backtrace().

Link: http://lkml.kernel.org/r/1472487169-14923-3-git-send-email-cmetcalf@mellanox.com
Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com>
Tested-by: Daniel Thompson <daniel.thompson@linaro.org> [arm]
Reviewed-by: Petr Mladek <pmladek@suse.com>
Acked-by: Aaron Tomlin <atomlin@redhat.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
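A sketch of the hoisted guard, assuming the cpumask-based primitive from the previous patch in the series (helper name is illustrative, not from the patch):

  #include <linux/cpumask.h>
  #include <linux/printk.h>
  #include <linux/smp.h>

  /* Handle the current CPU directly instead of trying to NMI ourselves. */
  static void backtrace_handle_self(struct cpumask *mask)
  {
          int this_cpu = get_cpu();

          if (cpumask_test_cpu(this_cpu, mask)) {
                  /*
                   * A plain dump_stack() in the current context is at
                   * least as informative as a stack captured from an
                   * NMI handler, and works on every architecture.
                   */
                  dump_stack();
                  cpumask_clear_cpu(this_cpu, mask);
          }
          put_cpu();
  }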
-
Submitted by Chris Metcalf

Patch series "improvements to the nmi_backtrace code" v9.

This patch series modifies the trigger_xxx_backtrace() NMI-based remote backtracing code to make it more flexible, and makes a few small improvements along the way.

The motivation comes from the task isolation code, where there are scenarios in which we want to be able to diagnose a case where some cpu is about to interrupt a task-isolated cpu. It can be helpful to see both where the interrupting cpu is, and also an approximation of where the cpu that is being interrupted is. The nmi_backtrace framework allows us to discover the stack of the interrupted cpu.

I've tested that the change works as desired on tile, and build-tested x86, arm, mips, and sparc64. For x86 I confirmed that the generic cpuidle stuff as well as the architecture-specific routines are in the new cpuidle section. For arm, mips, and sparc I just build-tested it and made sure the generic cpuidle routines were in the new cpuidle section, but I didn't attempt to figure out which the platform-specific idle routines might be. That might be more usefully done by someone with platform experience in follow-up patches.

This patch (of 4):

Currently you can only request a backtrace of either all cpus, or all cpus but yourself. It can also be helpful to request a remote backtrace of a single cpu, and since we want that, the logical extension is to support a cpumask as the underlying primitive.

This change modifies the existing lib/nmi_backtrace.c code to take a cpumask as its basic primitive, and modifies the linux/nmi.h code to use the new "cpumask" method instead.

The existing clients of nmi_backtrace (arm and x86) are converted to using the new cpumask approach in this change. The other users of the backtracing API (sparc64 and mips) are converted to use the cpumask approach rather than the all/allbutself approach. The mips code ignored the "include_self" boolean, but with this change it will now also dump a local backtrace if requested.

Link: http://lkml.kernel.org/r/1472487169-14923-2-git-send-email-cmetcalf@mellanox.com
Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com>
Tested-by: Daniel Thompson <daniel.thompson@linaro.org> [arm]
Reviewed-by: Aaron Tomlin <atomlin@redhat.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Miller <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
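A hedged sketch of the resulting interface shape (names follow the description above; the exact signatures and return types in the patch may differ):

  /* lib/nmi_backtrace.c: the underlying primitive now takes a mask */
  void nmi_trigger_cpumask_backtrace(const cpumask_t *mask, bool exclude_self,
                                     void (*raise)(cpumask_t *mask));

  /* linux/nmi.h: the old entry points become thin wrappers */
  static inline void trigger_all_cpu_backtrace(void)
  {
          arch_trigger_cpumask_backtrace(cpu_online_mask, false);
  }

  static inline void trigger_allbutself_cpu_backtrace(void)
  {
          arch_trigger_cpumask_backtrace(cpu_online_mask, true);
  }

  /* the new capability: a remote backtrace of one specific cpu */
  static inline void trigger_single_cpu_backtrace(int cpu)
  {
          arch_trigger_cpumask_backtrace(cpumask_of(cpu), false);
  }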
-
Submitted by Vineet Gupta

This came to light when implementing native 64-bit atomics for ARCv2.

The atomic64 self-test code uses CONFIG_ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE to check whether atomic64_dec_if_positive() is available. It seems it was needed when not every arch defined it. However, as of the current code the Kconfig option seems needless:

 - for CONFIG_GENERIC_ATOMIC64 it is auto-enabled in lib/Kconfig and a
   generic definition of the API is present in lib/atomic64.c
 - arches with native 64-bit atomics select it in arch/*/Kconfig and
   define the API in their headers

So I see no point in keeping the Kconfig option.

Compile tested for:
 - blackfin (CONFIG_GENERIC_ATOMIC64)
 - x86 (!CONFIG_GENERIC_ATOMIC64)
 - ia64

Link: http://lkml.kernel.org/r/1473703083-8625-3-git-send-email-vgupta@synopsys.com
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Helge Deller <deller@gmx.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@mellanox.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Zhaoxiu Zeng <zhaoxiu.zeng@gmail.com>
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Ming Lin <ming.l@ssi.samsung.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
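For reference, a simplified sketch of the generic fallback this relies on: on machines without native 64-bit atomics, lib/atomic64.c protects the counter with a spinlock (the real code uses a small hash of locks; a single lock is assumed here for brevity):

  #include <linux/atomic.h>
  #include <linux/spinlock.h>

  static DEFINE_RAW_SPINLOCK(a64_lock);

  long long atomic64_dec_if_positive(atomic64_t *v)
  {
          unsigned long flags;
          long long val;

          raw_spin_lock_irqsave(&a64_lock, flags);
          val = v->counter - 1;
          if (val >= 0)
                  v->counter = val;       /* only store if result stays >= 0 */
          raw_spin_unlock_irqrestore(&a64_lock, flags);
          return val;
  }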
-
Submitted by Vineet Gupta

This is based on the s390 version and is needed to get rid of CONFIG_ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE.

Link: http://lkml.kernel.org/r/1473703083-8625-2-git-send-email-vgupta@synopsys.com
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Yisheng Xie

arm64 has supported gigantic pages since commit 084bd298 ("ARM64: mm: HugeTLB support."); however, they can only be allocated at boot time and cannot be freed. This patch selects ARCH_HAS_GIGANTIC_PAGE so that gigantic pages can be allocated and freed at runtime on arm64.

Link: http://lkml.kernel.org/r/1475227569-63446-3-git-send-email-xieyisheng1@huawei.com
Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Hanjun Guo <guohanjun@huawei.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Yisheng Xie

Avoid letting the #ifdef grow unwieldy as more architectures gain gigantic page support. No functional change with this patch.

Link: http://lkml.kernel.org/r/1475227569-63446-2-git-send-email-xieyisheng1@huawei.com
Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com>
Suggested-by: Michal Hocko <mhocko@suse.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Hanjun Guo <guohanjun@huawei.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Baoyou Xie

We get 1 warning when building the kernel with W=1:

  drivers/char/mem.c:220:12: warning: no previous prototype for 'phys_mem_access_prot_allowed' [-Wmissing-prototypes]
   int __weak phys_mem_access_prot_allowed(struct file *file,

In fact, its declaration is spread across several architecture-specific header files, but it should be declared in a common header file. So this patch moves the phys_mem_access_prot_allowed() declaration to pgtable.h.

Link: http://lkml.kernel.org/r/1473751597-12139-1-git-send-email-baoyou.xie@linaro.org
Signed-off-by: Baoyou Xie <baoyou.xie@linaro.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
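For reference, the shape of the interface involved, as a sketch (file placement per the changelog; the weak default shown is the one drivers/char/mem.c provides, which architectures may override):

  /* common declaration, now in pgtable.h */
  int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
                                   unsigned long size, pgprot_t *vma_prot);

  /* drivers/char/mem.c: default is to allow all physical memory access */
  int __weak phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
                                          unsigned long size, pgprot_t *vma_prot)
  {
          return 1;
  }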
-
Submitted by Srikar Dronamraju

Currently a significant amount of memory is reserved only in a kernel booted to capture a kernel dump using the fadump method.

Kernels compiled with CONFIG_DEFERRED_STRUCT_PAGE_INIT will initialize only a certain amount of memory per node. That amount takes into account the dentry and inode cache sizes. Currently the cache sizes are calculated based on the total system memory, including the reserved memory. However, such a kernel, when booted as the fadump kernel, will not be able to allocate the required amount of memory to suffice for the dentry and inode caches. This results in crashes like the one below on large systems such as 32 TB systems:

  Dentry cache hash table entries: 536870912 (order: 16, 4294967296 bytes)
  vmalloc: allocation failure, allocated 4097114112 of 17179934720 bytes
  swapper/0: page allocation failure: order:0, mode:0x2080020(GFP_ATOMIC)
  CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.6-master+ #3
  Call Trace:
    dump_stack+0xb0/0xf0 (unreliable)
    warn_alloc_failed+0x114/0x160
    __vmalloc_node_range+0x304/0x340
    __vmalloc+0x6c/0x90
    alloc_large_system_hash+0x1b8/0x2c0
    inode_init+0x94/0xe4
    vfs_caches_init+0x8c/0x13c
    start_kernel+0x50c/0x578
    start_here_common+0x20/0xa8

Hence, only implement arch_reserved_kernel_pages() for CONFIG_FA_DUMP configurations. The reserved amount will be subtracted when calculating the large caches, avoiding crashes like the above.

Link: http://lkml.kernel.org/r/1472476010-4709-4-git-send-email-srikar@linux.vnet.ibm.com
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Suggested-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Cc: Hari Bathini <hbathini@linux.vnet.ibm.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Balbir Singh <bsingharora@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
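A hedged sketch of the hook pair this describes (the powerpc override returns the memblock-reserved size so the hash-table sizing ignores the fadump reservation; exact file locations are assumed):

  /* mm/page_alloc.c: weak default, no arch-specific reservation */
  unsigned long __weak arch_reserved_kernel_pages(void)
  {
          return 0;
  }

  /*
   * powerpc, CONFIG_FA_DUMP kernels only: exclude the fadump
   * reservation from the memory used to size the dentry/inode caches.
   */
  unsigned long arch_reserved_kernel_pages(void)
  {
          return memblock_reserved_size() / PAGE_SIZE;
  }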
-
Submitted by Helge Deller

Increase the initial kernel default page mapping size for SMP kernels to 32MB and add a runtime check which panics early if the kernel is bigger than the initial mapping size. This fixes boot crashes of 32-bit SMP kernels.

Due to the introduction of huge page support in kernel 4.4 and its required initial kernel layout in memory, a 32-bit SMP kernel usually got bigger (in layout, not size) than 16MB.

Cc: stable@vger.kernel.org # 4.4+
Signed-off-by: Helge Deller <deller@gmx.de>
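The runtime check amounts to comparing the end of the kernel image against the boot-time mapping. A minimal sketch, with symbol and constant names assumed from parisc conventions:

  extern char _end[];     /* end of the kernel image, from the linker script */

  static void __init check_initial_mapping(void)
  {
          /* KERNEL_INITIAL_SIZE: bytes covered by the initial mapping */
          if (__pa((unsigned long)_end) >= KERNEL_INITIAL_SIZE)
                  panic("KERNEL_INITIAL_ORDER too small!");
  }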
-
- 07 Oct 2016, 3 commits
-
-
Submitted by Helge Deller

Memblock is the standard kernel boot-time memory tracker/allocator. Use it instead of the bootmem allocator. This allows using kmemleak, CMA and other features.

Signed-off-by: Helge Deller <deller@gmx.de>
-
Submitted by Prarit Bhargava

When a CPU is physically added to a system, the MADT table is not updated. If a kdump kernel is subsequently started on that physically added CPU, the ACPI enumeration fails to provide the information for this CPU, which is now the boot CPU of the kdump kernel.

As a consequence, generic_processor_info() is not invoked for that CPU, so the number of enumerated processors is 0 and none of the initializations, including the logical package id management, are performed.

We have code which relies on the correctness of the logical package map and other information which is initialized via generic_processor_info(). Executing such code will result in undefined behaviour or kernel crashes.

This problem applies only to the kdump kernel, because a normal kexec will switch to the original boot CPU, which is enumerated in MADT, before jumping into the kexec kernel.

The boot code already has a check for num_processors equal 0 in prefill_possible_map(). We can use that check as an indicator that the enumeration of the boot CPU did not happen and invoke generic_processor_info() for it. That initializes the relevant data for the boot CPU and therefore prevents subsequent failure.

[ tglx: Refined the code and rewrote the changelog ]

Signed-off-by: Prarit Bhargava <prarit@redhat.com>
Fixes: 1f12e32f ("x86/topology: Create logical package id")
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Len Brown <len.brown@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: dyoung@redhat.com
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: kexec@lists.infradead.org
Link: http://lkml.kernel.org/r/1475514432-27682-1-git-send-email-prarit@redhat.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
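A hedged sketch of the change described above (the APIC-version argument and exact guard conditions are assumptions based on the x86 APIC code of that era, not quoted from the patch):

  void __init prefill_possible_map(void)
  {
          /*
           * No enumerated processors: the kdump boot CPU was physically
           * hot-added and is missing from MADT, so register it explicitly.
           */
          if (!num_processors) {
                  int apicid = boot_cpu_physical_apicid;

                  pr_warn("Boot CPU (id %d) not listed by BIOS\n",
                          hard_smp_processor_id());
                  if (apic->apic_id_valid(apicid))
                          generic_processor_info(apicid, boot_cpu_apic_version);
          }

          /* ... existing possible-CPU accounting continues ... */
  }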
-
Submitted by Helge Deller

Add hardened usercopy checks to the parisc architecture and clean up indenting.

Signed-off-by: Helge Deller <deller@gmx.de>
-
- 06 Oct 2016, 10 commits
-
-
Submitted by Boris Ostrovsky

Early during boot topology_update_package_map() computes logical_pkg_ids for all present processors. Later, when processors are brought up, identify_cpu() updates these values based on phys_pkg_id, which is a function of initial_apicid. On PV guests the latter may point to a non-existing node, causing logical_pkg_ids to be set to -1.

Intel's RAPL uses logical_pkg_id (as topology_logical_package_id()) to index its arrays, and therefore in this case will point to index 65535 (since logical_pkg_id is a u16). This could lead to either a crash or may actually access a random memory location.

As a workaround, we recompute the topology during CPU bringup to reset logical_pkg_id to a valid value.

(The reason initial_apicid is bogus is that it is the initial_apicid of the processor from which the guest is launched. This value is CPUID(1).EBX[31:24].)

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: stable@vger.kernel.org
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
Submitted by Ley Foon Tan

Use of_property_read_bool() instead of open-coding it as fcpu_has(). Convert the members of struct cpuinfo from u32 to bool accordingly, as they are only used as booleans anyhow.

Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Acked-by: Ley Foon Tan <lftan@altera.com>
-
Submitted by Josh Poimboeuf

When compiling on x86 with CONFIG_OPROFILE=m and CONFIG_FRAME_POINTER=n, the oprofile module fails to link:

  ERROR: "ftrace_graph_ret_addr" [arch/x86/oprofile/oprofile.ko] undefined!

The problem was introduced when oprofile was converted to use the new x86 unwinder. When frame pointers are disabled, the "guess" unwinder's unwind_get_return_address() is an inline function which calls ftrace_graph_ret_addr(), which is not exported.

Fix it by converting the "guess" version of unwind_get_return_address() to an exported out-of-line function, just like its frame pointer counterpart.

Reported-by: Karl Beldan <karl.beldan@gmail.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: ec2ad9cc ("oprofile/x86: Convert x86_backtrace() to use the new unwinder")
Link: http://lkml.kernel.org/r/be08d589f6474df78364e081c42777e382af9352.1475731632.git.jpoimboe@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
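Roughly, the fix looks like this: the function moves out of line into the "guess" unwinder's source file and gains an export (a reconstruction from the description, not the literal patch):

  unsigned long unwind_get_return_address(struct unwind_state *state)
  {
          if (unwind_done(state))
                  return 0;

          /* filter return addresses through the function-graph tracer */
          return ftrace_graph_ret_addr(state->task, &state->graph_idx,
                                       *state->sp, state->sp);
  }
  EXPORT_SYMBOL_GPL(unwind_get_return_address);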
-
Submitted by Russell King

Commit 215e362d ("ARM: 8306/1: loop_udelay: remove bogomips value limitation") tried to increase the bogomips limitation, but in doing so messed up udelay such that it always gives about a 5% error in the delay, even if we use a timer.

The calculation is:

  loops = UDELAY_MULT * us_delay * ticks_per_jiffy >> UDELAY_SHIFT

Originally, UDELAY_MULT was ((UL(2199023) * HZ) >> 11) and UDELAY_SHIFT 30. Assuming HZ=100, us_delay of 1000 and ticks_per_jiffy of 1660000 (e.g. a 166MHz timer, 1ms delay) this would calculate:

  ((UL(2199023) * HZ) >> 11) * 1000 * 1660000 >> 30
        => 165999

With the new values of 2047 * HZ + 483648 * HZ / 1000000 and 31, we get:

  (2047 * HZ + 483648 * HZ / 1000000) * 1000 * 1660000 >> 31
        => 158269

which is incorrect. This is due to a typo - correcting it gives:

  (2147 * HZ + 483648 * HZ / 1000000) * 1000 * 1660000 >> 31
        => 165999

in other words, the original value.

Fixes: 215e362d ("ARM: 8306/1: loop_udelay: remove bogomips value limitation")
Cc: <stable@vger.kernel.org>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
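The arithmetic is easy to double-check in user space; a throwaway sketch using the example values from the changelog above (it reproduces 165999 / 158269 / 165999):

  #include <stdio.h>

  int main(void)
  {
          const unsigned long long HZ = 100, us = 1000, tpj = 1660000;

          unsigned long long mult_old = (2199023ULL * HZ) >> 11;           /* original  */
          unsigned long long mult_bad = 2047 * HZ + 483648 * HZ / 1000000; /* typo'd    */
          unsigned long long mult_fix = 2147 * HZ + 483648 * HZ / 1000000; /* corrected */

          printf("old:   %llu\n", mult_old * us * tpj >> 30); /* 165999 */
          printf("typo:  %llu\n", mult_bad * us * tpj >> 31); /* 158269 */
          printf("fixed: %llu\n", mult_fix * us * tpj >> 31); /* 165999 */
          return 0;
  }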
-
Submitted by netmonk@netmonk.org

Good evening,

Following the LinuxCodingStyle documentation and with the help of Sam, fixed several indentation issues in the code, and a few other cosmetic changes. And last, and I hope least, fixing my name :)

Signed-off-by: Dominique Carrel <netmonk@netmonk.org>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by chris hyser

Enable relaxed ordering for memory writes in the IOMMU TSB entry from dma_4v_alloc_coherent(), dma_4v_map_page() and dma_4v_map_sg() when the dma_attrs flag DMA_ATTR_WEAK_ORDERING is set. This requires version 2.0 of the PCI IOMMU I/O Translation Services API.

Many PCIe devices allow enabling relaxed ordering (memory writes bypassing other memory writes) for various DMA buffers. A notable exception is the Mellanox mlx4 IB adapter. Due to the nature of x86 hardware this appears to have little performance impact there. On SPARC hardware, however, it results in major performance degradation, getting only about 3Gbps. Enabling RO in the IOMMU entries corresponding to mlx4 data buffers increases the throughput to about 13Gbps.

Orabug: 19245907

Signed-off-by: Chris Hyser <chris.hyser@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by chris hyser

Enable Version 2 of the PCI IOMMU API, needed for advanced features such as PCI Relaxed Ordering and a greater than 2GB DMA address space per root complex.

Signed-off-by: Chris Hyser <chris.hyser@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Paul Gortmaker

These files were only including module.h for exception table related functions. We've now separated that content out into its own file, "extable.h", so move over to that and avoid all the extra header content in module.h that we don't really need to compile these files.

Cc: "David S. Miller" <davem@davemloft.net>
Cc: sparclinux@vger.kernel.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Helge Deller

Add ENTRY_CFI() and ENDPROC_CFI() macros for dwarf debug info and convert assembly users to the new macros.

Signed-off-by: Helge Deller <deller@gmx.de>
-
Submitted by Helge Deller

Do not reserve space in the data section for the hpmc stack; instead move it into the page-aligned bss section.

Signed-off-by: Helge Deller <deller@gmx.de>
-
- 05 Oct 2016, 4 commits
-
-
Submitted by Renat Valiullin

In a virtualized environment the APIC timer calibration can go wrong when the host is overcommitted or the guest is running nested. This results in the APIC timers operating at an incorrect frequency. Since VMware supports a mechanism to retrieve the local APIC frequency, we can ask the hypervisor for it and skip the APIC calibration loop.

Signed-off-by: Renat Valiullin <rvaliullin@vmware.com>
Acked-by: Alok N Kataria <akataria@vmware.com>
Cc: virtualization@lists.linux-foundation.org
Link: http://lkml.kernel.org/r/20161004201148.GA1421@uu64vm
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Signed-off-by: Sylvain BERTRAND <sylvain.bertrand@gmail.com>
Link: http://lkml.kernel.org/r/20160929162234.GA29592@freedom
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Submitted by Boris Ostrovsky

xen_cpuhp_setup() calls mutex_lock() which, when CONFIG_DEBUG_MUTEXES is defined, ends up calling xen_save_fl(). That routine expects per_cpu(xen_vcpu, 0) to be already initialized.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
Submitted by Keith Busch

Move the driver source and Kconfig to the PCI host bridge drivers directory and move the config option to a more appropriate sub-menu instead of occupying the top-level location.

Update the Kconfig option with the X86_64 dependency that was implicitly included from the previous location, and add information about the module name when built as a loadable module.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Cc: Jon Derrick <jonathan.derrick@intel.com>
-
- 04 Oct 2016, 7 commits
-
-
Submitted by Mika Westerberg

When a CPU is about to be offlined we call fixup_irqs(), which resets IRQ affinities related to the CPU in question. The same thing is also done when the system is suspended to S-states like S3 (mem).

For each IRQ we try to complete any on-going move, regardless of whether the IRQ is actually part of x86_vector_domain. For each IRQ descriptor we fetch its chip_data, assume it is of type struct apic_chip_data, and manipulate it by clearing the old_domain mask etc. irq_chips that are not part of the x86_vector_domain, like those created by various GPIO drivers, will therefore find their chip_data changed unexpectedly.

Below is an example where a GPIO chip owned by pinctrl-sunrisepoint.c gets corrupted after resume:

  # cat /sys/kernel/debug/gpio
  gpiochip0: GPIOs 360-511, parent: platform/INT344B:00, INT344B:00:
   gpio-511 (        |sysfs   ) in  hi

  # rtcwake -s10 -mmem
  <10 seconds passes>

  # cat /sys/kernel/debug/gpio
  gpiochip0: GPIOs 360-511, parent: platform/INT344B:00, INT344B:00:
   gpio-511 (        |sysfs   ) in  ?

Note the '?' in the output. It means the struct gpio_chip ->get function is NULL, whereas before suspend it was there.

Fix this by first checking that the IRQ belongs to x86_vector_domain before we try to use the chip_data as struct apic_chip_data.

Reported-and-tested-by: Sakari Ailus <sakari.ailus@linux.intel.com>
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Cc: stable@vger.kernel.org # 4.4+
Link: http://lkml.kernel.org/r/20161003101708.34795-1-mika.westerberg@linux.intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
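A hedged sketch of the guard this describes (reconstructed from the changelog; the exact fix may differ in detail):

  void irq_force_complete_move(struct irq_desc *desc)
  {
          struct irq_data *irqdata;

          /*
           * This is called for all descriptors, regardless of which
           * irqdomain they belong to. Verify the descriptor really
           * carries vector-domain data before treating chip_data as
           * struct apic_chip_data.
           */
          irqdata = irq_domain_get_irq_data(x86_vector_domain,
                                            irq_desc_get_irq(desc));
          if (!irqdata)
                  return;

          /* ... safe to manipulate apic_chip_data from here on ... */
  }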
-
Submitted by Naveen N. Rao

In line with similar support for other architectures by Daniel Borkmann.

'MOD Default X' from test_bpf without constant blinding:

  84 bytes emitted from JIT compiler (pass:3, flen:7)
  d0000000058a4688 + <x>:
     0:  nop
     4:  nop
     8:  std    r27,-40(r1)
     c:  std    r28,-32(r1)
    10:  xor    r8,r8,r8
    14:  xor    r28,r28,r28
    18:  mr     r27,r3
    1c:  li     r8,66
    20:  cmpwi  r28,0
    24:  bne    0x0000000000000030
    28:  li     r8,0
    2c:  b      0x0000000000000044
    30:  divwu  r9,r8,r28
    34:  mullw  r9,r28,r9
    38:  subf   r8,r9,r8
    3c:  rotlwi r8,r8,0
    40:  li     r8,66
    44:  ld     r27,-40(r1)
    48:  ld     r28,-32(r1)
    4c:  mr     r3,r8
    50:  blr

... and with constant blinding:

  140 bytes emitted from JIT compiler (pass:3, flen:11)
  d00000000bd6ab24 + <x>:
     0:  nop
     4:  nop
     8:  std    r27,-40(r1)
     c:  std    r28,-32(r1)
    10:  xor    r8,r8,r8
    14:  xor    r28,r28,r28
    18:  mr     r27,r3
    1c:  lis    r2,-22834
    20:  ori    r2,r2,36083
    24:  rotlwi r2,r2,0
    28:  xori   r2,r2,36017
    2c:  xoris  r2,r2,42702
    30:  rotlwi r2,r2,0
    34:  mr     r8,r2
    38:  rotlwi r8,r8,0
    3c:  cmpwi  r28,0
    40:  bne    0x000000000000004c
    44:  li     r8,0
    48:  b      0x000000000000007c
    4c:  divwu  r9,r8,r28
    50:  mullw  r9,r28,r9
    54:  subf   r8,r9,r8
    58:  rotlwi r8,r8,0
    5c:  lis    r2,-17137
    60:  ori    r2,r2,39065
    64:  rotlwi r2,r2,0
    68:  xori   r2,r2,39131
    6c:  xoris  r2,r2,48399
    70:  rotlwi r2,r2,0
    74:  mr     r8,r2
    78:  rotlwi r8,r8,0
    7c:  ld     r27,-40(r1)
    80:  ld     r28,-32(r1)
    84:  mr     r3,r8
    88:  blr

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Submitted by Naveen N. Rao

Tail calls allow JIT'ed eBPF programs to call into other JIT'ed eBPF programs. This can be achieved either by: (1) retaining the stack setup by the first eBPF program and having all subsequent eBPF programs re-use it, or (2) by unwinding/tearing down the stack and having each eBPF program deal with its own stack as it sees fit.

To ensure that this does not create loops, there is a limit to how many tail calls can be done (currently 32). This requires the JIT'ed code to maintain a count of the number of tail calls done so far.

Approach (1) is simple, but requires every eBPF program to have (almost) the same prologue/epilogue, regardless of whether they need it. This is inefficient for small eBPF programs, which may not always need a prologue at all. As such, to minimize the impact of the tail call implementation, we use approach (2) here, which needs each eBPF program in the chain to use its own prologue/epilogue. This is not ideal when many tail calls are involved and when all the eBPF programs in the chain have similar prologue/epilogue. However, the impact is restricted to programs that do tail calls. Individual eBPF programs are not affected.

We maintain the tail call count in a fixed location on the stack, and updated tail call count values are passed in through this. The very first eBPF program in a chain sets this up to 0 (the first 2 instructions). Subsequent tail calls skip the first two eBPF JIT instructions to maintain the count. For programs that don't do tail calls themselves, the first two instructions are NOPs.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
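A hedged C rendering of the guard each JIT'ed program must emit before a tail call (the powerpc JIT emits equivalent instructions; the struct and limit names come from the generic BPF core, and the helper itself is illustrative):

  #include <linux/bpf.h>

  /* tcc lives at a fixed stack slot; it is carried across tail calls */
  static int bpf_tail_call_allowed(u32 *tcc, struct bpf_array *array, u32 index)
  {
          if (index >= array->map.max_entries)
                  return 0;               /* out of range: fall through */
          if (++(*tcc) > MAX_TAIL_CALL_CNT)
                  return 0;               /* chain limit (32) exceeded  */
          if (!array->ptrs[index])
                  return 0;               /* empty slot: fall through   */
          return 1;                       /* ok: jump past the target's
                                             two tcc-init instructions  */
  }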
-
Submitted by Naveen N. Rao

While at it, ensure that the location of the local save area is consistent whether or not we set up our own stackframe. This property is utilised in the next patch, which adds support for tail calls.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Submitted by Michael Ellerman

The fadump code calls vmcore_cleanup(), which only exists if CONFIG_PROC_VMCORE=y. We don't want to depend on CONFIG_PROC_VMCORE, because it's user selectable, so just wrap the call in an #ifdef.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Submitted by Cyril Bur

Currently the MSR TM bit is always set if the hardware is TM capable. This adds extra overhead, as it means the TM SPRs (TFHAR, TEXASR and TFIAR) must be swapped for each process regardless of whether it uses TM. For processes that don't use TM, the TM MSR bit can be turned off, allowing the kernel to avoid the expensive swap of the TM registers.

A TM unavailable exception will occur if a thread does use TM, and the kernel will then enable MSR_TM and leave it set for some time afterwards.

Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
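A hedged sketch of the lazy-enable path in the facility-unavailable handler (field and helper names are assumptions drawn from the description and powerpc conventions):

  static void tm_unavailable(struct pt_regs *regs)
  {
          if (user_mode(regs)) {
                  /*
                   * First TM use by this thread: turn the facility on
                   * and keep it on for a while, so the SPR swap cost is
                   * only paid by tasks that actually use TM.
                   */
                  current->thread.load_tm++;
                  regs->msr |= MSR_TM;
                  tm_enable();
                  tm_restore_sprs(&current->thread);
                  return;
          }
          /* TM unavailable inside the kernel is unrecoverable */
          die("Unrecoverable TM Unavailable Exception", regs, SIGABRT);
  }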
-
Submitted by Cyril Bur

If the kernel disables transactional memory (TM) and userspace still tries TM-related actions (TM instructions or TM SPR accesses), TM-aware hardware will cause the kernel to take a facility unavailable exception. Add checks for the exception being caused by illegal TM access in userspace.

Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
[mpe: Rewrite comment entirely, bugs in it are mine]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-