- 02 Sep, 2020: 40 commits
-
Submitted by Peng Wang
to #29722367

On some specific hardware with a 128-byte LLC cacheline, tk_core may suffer from false sharing. We can align it to 128 bytes so that it is not affected by other global variables. This wastes a bit of cache capacity but yields a good performance improvement. So for both 64-byte and 128-byte LLC cachelines, we adjust the tk_core memory layout to avoid potential cacheline contention.

Signed-off-by: Peng Wang <rocking@linux.alibaba.com>
Acked-by: Shanpei Chen <shanpeic@linux.alibaba.com>
Reviewed-by: Shile Zhang <shile.zhang@linux.alibaba.com>
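A minimal sketch of the idea, using the mainline tk_core definition from kernel/time/timekeeping.c; the exact alignment attribute used by the patch is an assumption here:

  /* Align the hot timekeeping state to 128 bytes so it never shares an
   * LLC cacheline with neighbouring globals, even on parts whose LLC
   * cacheline (128B) is wider than L1_CACHE_BYTES. */
  static struct {
          seqcount_t        seq;
          struct timekeeper timekeeper;
  } tk_core __attribute__((__aligned__(128)));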
-
Submitted by Thomas Gleixner
fix #29760792

commit 94a46d316f2b54e3de8a4fa884cb16383db7fcd8 upstream

There is no reason to have nmi_enter/exit() in the actual MCE handlers. Move it to the entry point. This also covers the until now uncovered initial handler which only prints.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Andy Lutomirski <luto@kernel.org>
Link: https://lkml.kernel.org/r/20200505135314.243936614@linutronix.de
Signed-off-by: Zelin Deng <zelin.deng@linux.alibaba.com>
Reviewed-by: luanshi <zhangliguang@linux.alibaba.com>
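A hedged sketch of the shape of the change (do_mce() and machine_check_vector() follow the mainline MCE entry of that era; the exact entry function in this tree may differ):

  /* Do the NMI bookkeeping once, at the exception entry point, instead
   * of inside each MCE handler variant behind machine_check_vector. */
  dotraplinkage notrace void do_mce(struct pt_regs *regs, long error_code)
  {
          nmi_enter();
          machine_check_vector(regs, error_code);
          nmi_exit();
  }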
-
Submitted by Dave Hansen
to #28718400

commit 5a28fc94c9143db766d1ba5480cae82d856ad080 upstream.

This is a bit of a mess, to put it mildly. But, it's a bug that only seems to have showed up in 4.20 but wasn't noticed until now, because nobody uses MPX.

MPX has the arch_unmap() hook inside of munmap() because MPX uses bounds tables that protect other areas of memory. When memory is unmapped, there is also a need to unmap the MPX bounds tables. Barring this, unused bounds tables can eat 80% of the address space.

But, the recursive do_munmap() that gets called via arch_unmap() wreaks havoc with __do_munmap()'s state. It can result in freeing populated page tables, accessing bogus VMA state, double-freed VMAs and more. See the "long story" further below for the gory details.

To fix this, call arch_unmap() before __do_munmap() has a chance to do anything meaningful. Also, remove the 'vma' argument and force the MPX code to do its own, independent VMA lookup.

== UML / unicore32 impact ==

Remove unused 'vma' argument to arch_unmap(). No functional change. I compile tested this on UML but not unicore32.

== powerpc impact ==

powerpc uses arch_unmap() to watch for munmap() on the VDSO and zeroes out 'current->mm->context.vdso_base'. Moving arch_unmap() makes this happen earlier in __do_munmap(). But, 'vdso_base' seems to only be used in perf and in the signal delivery that happens near the return to userspace. I can not find any likely impact to powerpc, other than the zeroing happening a little earlier.

powerpc does not use the 'vma' argument and is unaffected by its removal. I compile-tested a 64-bit powerpc defconfig.

== x86 impact ==

For the common success case this is functionally identical to what was there before. For the munmap() failure case, it's possible that some MPX tables will be zapped for memory that continues to be in use. But, this is an extraordinarily unlikely scenario and the harm would be that MPX provides no protection since the bounds table got reset (zeroed).

I can't imagine anyone doing this:

  ptr = mmap();
  // use ptr
  ret = munmap(ptr);
  if (ret)
          // oh, there was an error, I'll
          // keep using ptr.

Because if you're doing munmap(), you are *done* with the memory. There's probably no good data in there _anyway_.

This passes the original reproducer from Richard Biener as well as the existing mpx selftests/.

The long story:

munmap() has a couple of pieces:
1. Find the affected VMA(s)
2. Split the start/end one(s) if necessary
3. Pull the VMAs out of the rbtree
4. Actually zap the memory via unmap_region(), including freeing page tables (or queueing them to be freed).
5. Fix up some of the accounting (like fput()) and actually free the VMA itself.

This specific ordering was actually introduced by:

  dd2283f2605e ("mm: mmap: zap pages with read mmap_sem in munmap")

during the 4.20 merge window. The previous __do_munmap() code was actually safe because the only thing after arch_unmap() was remove_vma_list(). arch_unmap() could not see 'vma' in the rbtree because it was detached, so it is not even capable of doing operations unsafe for remove_vma_list()'s use of 'vma'.

Richard Biener reported a test that shows this in dmesg:

  [1216548.787498] BUG: Bad rss-counter state mm:0000000017ce560b idx:1 val:551
  [1216548.787500] BUG: non-zero pgtables_bytes on freeing mm: 24576

What triggered this was the recursive do_munmap() called via arch_unmap(). It was freeing page tables that had not been properly zapped.

But, the problem was bigger than this. For one, arch_unmap() can free VMAs. But, the calling __do_munmap() has variables that *point* to VMAs and obviously can't handle them just getting freed while the pointer is still in use.

I tried a couple of things here. First, I tried to fix the page table freeing problem in isolation, but I then found the VMA issue. I also tried having the MPX code return a flag if it modified the rbtree which would force __do_munmap() to re-walk to restart. That spiralled out of control in complexity pretty fast. Just moving arch_unmap() and accepting that the bonkers failure case might eat some bounds tables seems like the simplest viable fix.

This was also reported in the following kernel bugzilla entry:

  https://bugzilla.kernel.org/show_bug.cgi?id=203123

There are some reports that this commit triggered this bug:

  3ee4347a3fb3 ("mm: mmap: zap pages with read mmap_sem in munmap")

While that commit certainly made the issues easier to hit, I believe the fundamental issue has been with us as long as MPX itself, thus the Fixes: tag below is for one of the original MPX commits.

[ mingo: Minor edits to the changelog and the patch. ]

Reported-by: Richard Biener <rguenther@suse.de>
Reported-by: H.J. Lu <hjl.tools@gmail.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Yang Shi <yang.shi@linux.alibaba.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Anton Ivanov <anton.ivanov@cambridgegreys.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rik van Riel <riel@surriel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: linux-arch@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: linux-um@lists.infradead.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: stable@vger.kernel.org
Fixes: 3ee4347a3fb3 ("mm: mmap: zap pages with read mmap_sem in munmap")
Link: http://lkml.kernel.org/r/20190419194747.5E1AD6DC@viggo.jf.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
[xuyu: resolve conflicts in arch/x86/include/asm/mpx.h]
Signed-off-by: Xu Yu <xuyu@linux.alibaba.com>
Acked-by: Xunlei Pang <xlpang@linux.alibaba.com>
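A hedged sketch of the reordering described above, heavily simplified from mm/mmap.c (error handling and the body of __do_munmap() elided):

  int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
                  struct list_head *uf, bool downgrade)
  {
          unsigned long end = start + PAGE_ALIGN(len);

          /* Let MPX tear down its bounds tables first, while the VMA tree
           * is still untouched; the recursive do_munmap() it may issue can
           * no longer corrupt the state built up below. */
          arch_unmap(mm, start, end);

          /* ... find/split/detach VMAs, unmap_region(), accounting ... */
          return 0;
  }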
-
Submitted by Borislav Petkov
fix #29415191

commit 1df73b2131e3b33d518609769636b41ce00212de upstream

The severity grading code returns IN_KERNEL_RECOV error context for errors which have happened in kernel space but from which the kernel can recover. Whether the recovery can happen is determined by the exception table entry having as handler ex_handler_fault() and which has been declared at build time using _ASM_EXTABLE_FAULT().

IN_KERNEL_RECOV is used in mce_severity_intel() to look up the corresponding error severity in the severities table.

However, the mapping back from error severity to whether the error is IN_KERNEL_RECOV is ambiguous, and in the very paranoid case - which might not be possible right now, but better safe than sorry later - an exception fixup could be attempted for another MCE whose address is in the exception table and has the proper severity. Which would be unfortunate, to say the least.

Therefore, mark such MCEs explicitly as MCE_IN_KERNEL_RECOV so that the recovery attempt is done only for them. Document the whole handling, while at it, as it is not trivial.

Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Tested-by: Tony Luck <tony.luck@intel.com>
Link: https://lkml.kernel.org/r/20200407163414.18058-10-bp@alien8.de
Signed-off-by: Youquan Song <youquan.song@intel.com>
Signed-off-by: Wetp Zhang <wetp.zy@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
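A hedged sketch of the tagging described above (flag and context names follow the upstream description; the exact call sites and fixup_exception() arguments are assumptions):

  /* When grading finds a recoverable in-kernel error, tag the MCE itself... */
  if (s->context == IN_KERNEL_RECOV)
          m->kflags |= MCE_IN_KERNEL_RECOV;

  /* ...and only attempt the exception fixup for explicitly tagged MCEs. */
  if (m->kflags & MCE_IN_KERNEL_RECOV)
          return fixup_exception(regs, X86_TRAP_MC, 0, 0);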
-
Submitted by Tony Luck
fix #29415191

commit 43505646941bee217b91d064756975aa1ab6ee3b upstream

Sometimes, when logs are getting lost, it's nice to just have everything dumped to the serial console.

Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Tested-by: Tony Luck <tony.luck@intel.com>
Link: https://lkml.kernel.org/r/20200214222720.13168-7-tony.luck@intel.com
Signed-off-by: Youquan Song <youquan.song@intel.com>
Signed-off-by: Wetp Zhang <wetp.zy@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
-
Submitted by Tony Luck
fix #29415191

commit 925946cfa715a5a71639528f82b98e58f14dd4cb upstream

Instead of keeping count of how many handlers are registered on the MCE notifier chain and printing if below some magic value, look at mce->kflags to see if anyone claims to have handled/logged this error.

[ bp: Do not print ->kflags in __print_mce(). ]

Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Tested-by: Tony Luck <tony.luck@intel.com>
Link: https://lkml.kernel.org/r/20200214222720.13168-6-tony.luck@intel.com
Signed-off-by: Youquan Song <youquan.song@intel.com>
Signed-off-by: Wetp Zhang <wetp.zy@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
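A hedged sketch of the resulting default-notifier check (mca_cfg.print_all is the serial-console knob from the "dump everything" commit above; exact spelling is an assumption):

  /* Print only if the admin asked for everything, or if no earlier
   * handler on the chain claimed to have handled/logged this error. */
  if (mca_cfg.print_all || !m->kflags)
          __print_mce(m);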
-
Submitted by Tony Luck
fix #29415191

commit 23ba710a0864108910c7531dc4c73ef65eca5568 upstream

If the handler took any action to log or deal with the error, set a bit in mce->kflags so that the default handler on the end of the machine check chain can see what has been done. Get rid of NOTIFY_STOP returns. Make the EDAC and dev-mcelog handlers skip over errors already processed by CEC.

Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Tested-by: Tony Luck <tony.luck@intel.com>
Link: https://lkml.kernel.org/r/20200214222720.13168-5-tony.luck@intel.com
Signed-off-by: Youquan Song <youquan.song@intel.com>
Signed-off-by: Wetp Zhang <wetp.zy@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
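A hedged sketch of the handler-side pattern this introduces (a generic chain handler is shown; real handler names and logging details differ):

  static int mce_chain_handler(struct notifier_block *nb,
                               unsigned long val, void *data)
  {
          struct mce *m = (struct mce *)data;

          /* Skip errors the CEC already claimed. */
          if (m->kflags & MCE_HANDLED_CEC)
                  return NOTIFY_DONE;

          /* ... log or otherwise deal with the error ... */

          /* Record the action instead of returning NOTIFY_STOP. */
          m->kflags |= MCE_HANDLED_EDAC;
          return NOTIFY_OK;
  }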
-
Submitted by Tony Luck
fix #29415191

commit 1de08dccd383482a3e88845d3554094d338f5ff9 upstream

There can be many different subsystems registered on the MCE handler chain. Add a new bitmask field and define values so that handlers can indicate whether they took any action to log or otherwise handle an error. The default handler at the end of the chain can use this information to decide whether to print to the console log.

Boris suggested a generic name and leaving plenty of spare bits for possible future use.

[ bp: Move flag bits to the internal mce.h header and use BIT_ULL(). ]

Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Tested-by: Tony Luck <tony.luck@intel.com>
Link: https://lkml.kernel.org/r/20200214222720.13168-4-tony.luck@intel.com
Signed-off-by: Youquan Song <youquan.song@intel.com>
Signed-off-by: Wetp Zhang <wetp.zy@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
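A hedged sketch of the new field and flags, modeled on the upstream description (which puts the flag bits in the internal mce.h header using BIT_ULL()); the exact bit assignments here are assumptions:

  /* in struct mce: */
  __u64 kflags;                       /* set by chain handlers that acted */

  #define MCE_HANDLED_CEC     BIT_ULL(0)
  #define MCE_HANDLED_UC      BIT_ULL(1)
  #define MCE_HANDLED_EXTLOG  BIT_ULL(2)
  #define MCE_HANDLED_NFIT    BIT_ULL(3)
  #define MCE_HANDLED_EDAC    BIT_ULL(4)
  #define MCE_HANDLED_MCELOG  BIT_ULL(5)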
-
Submitted by Peter Zijlstra
fix #29415191

commit 0d00449c7a28a1514595630735df383dec606812 upstream

A few exceptions (like #DB and #BP) can happen at any location in the code, this then means that tracers should treat events from these exceptions as NMI-like. The interrupted context could be holding locks with interrupts disabled for instance. Similarly, #MC is an actual NMI-like exception.

All of them use ist_enter() which only concerns itself with RCU, but does not do any of the other setup that NMIs need. This means things like:

  printk()
    raw_spin_lock_irq(&logbuf_lock);
    <#DB/#BP/#MC>
      printk()
        raw_spin_lock_irq(&logbuf_lock);

are entirely possible (well, not really since printk tries hard to play nice, but the concept stands).

So replace ist_enter() with nmi_enter(). Also observe that any nmi_enter() caller must be both notrace and NOKPROBE, or in the noinstr text section.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com>
Link: https://lkml.kernel.org/r/20200505134101.525508608@linutronix.de
Signed-off-by: Youquan Song <youquan.song@intel.com>
Signed-off-by: Wetp Zhang <wetp.zy@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
-
Submitted by Peter Zijlstra
fix #29415191

commit 5567d11c21a1d508a91a8cb64a819783a0835d9f upstream

Convert #MC over to using task_work_add(); it will run the same code slightly later, on the return to user path of the same exception.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com>
Link: https://lkml.kernel.org/r/20200505134100.957390899@linutronix.de
Signed-off-by: Youquan Song <youquan.song@intel.com>
Signed-off-by: Wetp Zhang <wetp.zy@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
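A hedged sketch of the deferral (field and callback names follow the upstream patch but are assumptions here; in this kernel era task_work_add() takes a bool notify flag as its third argument):

  /* In the #MC handler: queue the user-visible work instead of doing it
   * in exception context; it runs when this task returns to user space. */
  current->mce_addr = m->addr;
  init_task_work(&current->mce_kill_me, kill_me_maybe);
  task_work_add(current, &current->mce_kill_me, true);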
-
Submitted by Youquan Song
fix #29415191

commit b052df3da821adfd6be26a6eb16624fb50e90e56 upstream

This is completely overengineered and definitely not an interface which should be made available to anything else than this particular MCE case.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200505134059.462640294@linutronix.de
Signed-off-by: Youquan Song <youquan.song@intel.com>
Signed-off-by: Wetp Zhang <wetp.zy@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
-
Submitted by Tony Luck
fix #29415191

commit 17fae1294ad9d711b2c3dd0edef479d40c76a5e8 upstream

An interesting thing happened when a guest Linux instance took a machine check. The VMM unmapped the bad page from guest physical space and passed the machine check to the guest.

Linux took all the normal actions to offline the page from the process that was using it. But then guest Linux crashed because it said there was a second machine check inside the kernel with this stack trace:

  do_memory_failure
    set_mce_nospec
      set_memory_uc
        _set_memory_uc
          change_page_attr_set_clr
            cpa_flush
              clflush_cache_range_opt

This was odd, because a CLFLUSH instruction shouldn't raise a machine check (it isn't consuming the data). Further investigation showed that the VMM had passed in another machine check because it appeared that the guest was accessing the bad page.

The fix is to check the scope of the poison by checking the MCi_MISC register. If the entire page is affected, then unmap the page. If only part of the page is affected, then mark the page as uncacheable.

This assumes that VMMs will do the logical thing and pass in the "whole page scope" via the MCi_MISC register (since they unmapped the entire page).

[ bp: Adjust to x86/entry changes. ]

Fixes: 284ce401 ("x86/memory_failure: Introduce {set, clear}_mce_nospec()")
Reported-by: Jue Wang <juew@google.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Jue Wang <juew@google.com>
Cc: <stable@vger.kernel.org>
Link: https://lkml.kernel.org/r/20200520163546.GA7977@agluck-desk2.amr.corp.intel.com
Signed-off-by: Youquan Song <youquan.song@intel.com>
Signed-off-by: Wetp Zhang <wetp.zy@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
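A hedged sketch of the scope check, mirroring the whole_page() helper the upstream commit describes (macro names assumed from the arch/x86 MCE headers):

  /* MCi_MISC reports the least significant valid bit of the recorded
   * address; a recoverable-address LSB >= PAGE_SHIFT means the poison
   * scope covers the whole page. */
  static bool whole_page(struct mce *m)
  {
          if (!(m->status & MCI_STATUS_MISCV))
                  return true;
          return MCI_MISC_ADDR_LSB(m->misc) >= PAGE_SHIFT;
  }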
-
Submitted by Masahiro Yamada
task #29499913

commit d198b34f3855eee2571dda03eea75a09c7c31480 upstream

Add SPDX License Identifier to all .gitignore files.

erwei: Because the locations of the .gitignore files from the upstream are different from our kernel, I added the line "# SPDX-License-Identifier: GPL-2.0-only" to our own .gitignore files.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Erwei Deng <erwei@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
-
Submitted by Babu Moger
fix #29429936

commit 9d40b85bb46a99bc95dad3a07787da93b0a018e9 upstream

AMD Zen 2 introduces a new RDPRU instruction which is used to give access to some processor registers that are typically only accessible when the privilege level is zero. ECX is used as the implicit register to specify which register to read. RDPRU places the specified register’s value into EDX:EAX.

For example, the RDPRU instruction can be used to read MPERF and APERF at CPL > 0.

Add the feature bit so it is visible in /proc/cpuinfo.

Details are available in the AMD64 Architecture Programmer’s Manual:
https://www.amd.com/system/files/TechDocs/24594.pdf

Signed-off-by: Babu Moger <babu.moger@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Aaron Lewis <aaronlewis@google.com>
Cc: ak@linux.intel.com
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: robert.hu@linux.intel.com
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/20191007204839.5727.10803.stgit@localhost.localdomain
Signed-off-by: Artie Ding <artie.ding@linux.alibaba.com>
Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
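A hedged illustration of the instruction being advertised (not part of the patch itself, which only adds the feature bit). Per the APM, ECX selects the register: 0 for MPERF, 1 for APERF. The raw .byte encoding (0F 01 FD) is used in case the assembler lacks the mnemonic:

  static inline u64 rdpru(u32 regsel)
  {
          u32 lo, hi;

          asm volatile(".byte 0x0f,0x01,0xfd"   /* RDPRU */
                       : "=a" (lo), "=d" (hi)
                       : "c" (regsel));
          return lo | ((u64)hi << 32);
  }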
-
Submitted by Babu Moger
fix #29035143

commit 2c18bd525c47f882f033b0a813ecd09c93e1ecdf upstream

Memory bandwidth is calculated by reading the monitoring counter at two intervals and computing the delta. It is the software’s responsibility to read the count often enough to avoid having the count roll over _twice_ between reads.

The current code hardcodes the bandwidth monitoring counter's width to 24 bits for AMD. This is due to the default base counter width, which is 24. Currently, AMD does not implement CPUID 0xF.[ECX=1]:EAX to adjust the counter width. But the AMD hardware supports a much wider bandwidth counter with a default width of 44 bits.

The kernel reads these monitoring counters every 1 second and adjusts the counter value for overflow. With 24 bits and a scale value of 64 for AMD, it can only measure up to 1GB/s without overflowing. For rates above 1GB/s this will fail to measure the bandwidth.

Fix the issue by setting the default width to 44 bits by adjusting the offset. AMD future products will implement CPUID 0xF.[ECX=1]:EAX.

[ bp: Let the line stick out and drop {}-brackets around a single statement. ]

Fixes: 4d05bf71f157 ("x86/resctrl: Introduce AMD QOS feature")
Signed-off-by: Babu Moger <babu.moger@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/159129975546.62538.5656031125604254041.stgit@naples-babu.amd.com
Signed-off-by: Zelin Deng <zelin.deng@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
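The wrap-safe delta computation this enables, as a hedged sketch mirroring resctrl's mbm_overflow_count() idea (with width=44 and a 64-byte scale, 1-second reads now cover far more than the 1GB/s that 24 bits allowed, since 2^24 chunks * 64 bytes is exactly 1GiB):

  static u64 mbm_overflow_count(u64 prev, u64 cur, unsigned int width)
  {
          u64 shift = 64 - width;

          /* Shift both samples so the counter's top bit sits at bit 63;
           * the subtraction then wraps exactly as the hardware does. */
          return ((cur << shift) - (prev << shift)) >> shift;
  }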
-
Submitted by Reinette Chatre
fix #29035143

commit f3d44f18b0662327c42128b9d3604489bdb6e36f upstream

The original Memory Bandwidth Monitoring (MBM) architectural definition defines counters of up to 62 bits in the IA32_QM_CTR MSR, while the first-generation MBM implementation uses statically defined 24 bit counters.

Expand the MBM CPUID enumeration properties to include the MBM counter width. The previously undefined EAX output register contains, in bits [7:0], the MBM counter width encoded as an offset from 24 bits. Enumerating this property is only specified for Intel CPUs.

Suggested-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/afa3af2f753f6bc301fb743bc8944e749cb24afa.1588715690.git.reinette.chatre@intel.com
Signed-off-by: Zelin Deng <zelin.deng@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
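A hedged sketch of the enumeration read described above (CPUID.(EAX=0xF, ECX=1).EAX[7:0] encodes the counter width as an offset from 24 bits; variable names are ours):

  unsigned int eax, ebx, ecx, edx, mbm_width;

  cpuid_count(0xF, 1, &eax, &ebx, &ecx, &edx);
  mbm_width = 24 + (eax & 0xff);   /* full MBM counter width in bits */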
-
Submitted by Reinette Chatre
fix #29035143

commit 0118ad82c2a64ebcf15d7565ed35361407efadfa upstream

The function determining a platform's support and properties of cache occupancy and memory bandwidth monitoring (properties of X86_FEATURE_CQM_LLC) can be found among the common CPU code. After the feature's properties are populated in the per-CPU data, the resctrl subsystem is the only consumer (via boot_cpu_data).

Move the function that obtains the CPU information used by resctrl to the resctrl subsystem and rename it from init_cqm() to resctrl_cpu_detect(). The function continues to be called from the common CPU code. This move is done in preparation for the addition of some vendor-specific code.

No functional change.

Suggested-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/38433b99f9d16c8f4ee796f8cc42b871531fa203.1588715690.git.reinette.chatre@intel.com
Signed-off-by: Zelin Deng <zelin.deng@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
-
Submitted by Reinette Chatre
fix #29035143

commit 8dd97c65185c5a63c668e5bd8a861c04f47a35ed upstream

asm/resctrl_sched.h is dedicated to the code used for configuration of the CPU resource control state when a task is scheduled. Rename resctrl_sched.h to resctrl.h in preparation for additions that will no longer make this file dedicated to work done during scheduling.

No functional change.

Suggested-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/6914e0ef880b539a82a6d889f9423496d471ad1d.1588715690.git.reinette.chatre@intel.com
Signed-off-by: Zelin Deng <zelin.deng@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
-
Submitted by Kim Phillips
fix #29035100

commit e48667b865480d8bf0f1171a8b474ffc785b9ace upstream

Family 19h introduces a change in slice, core and thread specification in its L3 Performance Event Select (ChL3PmcCfg) h/w register. The change is incompatible with Family 17h's version of the register.

Introduce a new path in l3_thread_slice_mask() to do things differently for Family 19h vs. Family 17h, otherwise the new hardware doesn't get programmed correctly. Instead of a linear core--thread bitmask, Family 19h takes an encoded core number, and a separate thread mask. There are new bits that are set for all cores and all slices, of which only the latter is used, since the driver counts events for all slices on behalf of the specified CPU.

Also update amd_uncore_init() to base its L2/NB vs. L3/Data Fabric mode decision on Family 17h or above, not just 17h and 18h: the Family 19h Data Fabric PMC is compatible with the Family 17h DF PMC.

[ bp: Touchups. ]

Signed-off-by: Kim Phillips <kim.phillips@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200313231024.17601-3-kim.phillips@amd.com
Signed-off-by: Peng Wang <rocking@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
-
Submitted by Kim Phillips
fix #29035100

commit 9689dbbeaea884d19e3085439c6a247ef986b2af upstream

Convert the l3_thread_slice_mask() function to use the more readable topology_* helper functions, more intuitive variable names like shift and thread_mask, and BIT_ULL().

No functional changes.

Signed-off-by: Kim Phillips <kim.phillips@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200313231024.17601-2-kim.phillips@amd.com
Signed-off-by: Peng Wang <rocking@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
-
Submitted by Kim Phillips
fix #29035100

commit 4dcc3df82573a946c620dda5fb00e27c7b080105 upstream

In order to better accommodate the upcoming Family 19h, given the 80-char line limit, move the existing code into a new l3_thread_slice_mask() function.

No functional changes.

[ bp: Touchups. ]

Signed-off-by: Kim Phillips <kim.phillips@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200313231024.17601-1-kim.phillips@amd.com
Signed-off-by: Peng Wang <rocking@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
-
Submitted by Kim Phillips
fix #29035100

commit 753039ef8b2f1078e5bff8cd42f80578bf6385b0 upstream

Family 19h CPUs are Zen-based and still share most architectural features with Family 17h CPUs, and therefore still need to call init_amd_zn() e.g., to set the RECLAIM_DISTANCE override.

init_amd_zn() also sets X86_FEATURE_ZEN, which today is only used in amd_set_core_ssb_state(), which isn't called on some late model Family 17h CPUs, nor on any Family 19h CPUs: X86_FEATURE_AMD_SSBD replaces X86_FEATURE_LS_CFG_SSBD on those later model CPUs, where the SSBD mitigation is done via the SPEC_CTRL MSR instead of the LS_CFG MSR.

Family 19h CPUs also don't have the erratum where the CPB feature bit isn't set, but that code can stay unchanged and run safely on Family 19h.

Signed-off-by: Kim Phillips <kim.phillips@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20200311191451.13221-1-kim.phillips@amd.com
Signed-off-by: Peng Wang <rocking@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
-
Submitted by Kim Phillips
fix #29035100

commit 5738891229a25e9e678122a843cbf0466a456d0c upstream

Description of hardware operation
---------------------------------

The core AMD PMU has a 4-bit wide per-cycle increment for each performance monitor counter. That works for most events, but now with AMD Family 17h and above processors, some events can occur more than 15 times in a cycle. Those events are called "Large Increment per Cycle" events. In order to count these events, two adjacent h/w PMCs get their count signals merged to form 8 bits per cycle total. In addition, the PERF_CTR count registers are merged to be able to count up to 64 bits.

Normally, events like instructions retired, get programmed on a single counter like so:

  PERF_CTL0 (MSR 0xc0010200) 0x000000000053ff0c # event 0x0c, umask 0xff
  PERF_CTR0 (MSR 0xc0010201) 0x0000800000000001 # r/w 48-bit count

The next counter at MSRs 0xc0010202-3 remains unused, or can be used independently to count something else. When counting Large Increment per Cycle events, such as FLOPs, however, we now have to reserve the next counter and program the PERF_CTL (config) register with the Merge event (0xFFF), like so:

  PERF_CTL0 (msr 0xc0010200) 0x000000000053ff03 # FLOPs event, umask 0xff
  PERF_CTR0 (msr 0xc0010201) 0x0000800000000001 # rd 64-bit cnt, wr lo 48b
  PERF_CTL1 (msr 0xc0010202) 0x0000000f004000ff # Merge event, enable bit
  PERF_CTR1 (msr 0xc0010203) 0x0000000000000000 # wr hi 16-bits count

The count is widened from the normal 48-bits to 64 bits by having the second counter carry the higher 16 bits of the count in its lower 16 bits of its counter register. The odd counter, e.g., PERF_CTL1, is programmed with the enabled Merge event before the even counter, PERF_CTL0.

The Large Increment feature is available starting with Family 17h. For more details, search any Family 17h PPR for the "Large Increment per Cycle Events" section, e.g., section 2.1.15.3 on p. 173 in this version:

  https://www.amd.com/system/files/TechDocs/56176_ppr_Family_17h_Model_71h_B0_pub_Rev_3.06.zip

Description of software operation
---------------------------------

The following steps are taken in order to support reserving and enabling the extra counter for Large Increment per Cycle events:

1. In the main x86 scheduler, we reduce the number of available counters by the number of Large Increment per Cycle events being scheduled, tracked by a new cpuc variable 'n_pair' and a new amd_put_event_constraints_f17h(). This improves the counter scheduler success rate.

2. In perf_assign_events(), if a counter is assigned to a Large Increment event, we increment the current counter variable, so the counter used for the Merge event is removed from assignment consideration by upcoming event assignments.

3. In find_counter(), if a counter has been found for the Large Increment event, we set the next counter as used, to prevent other events from using it.

4. We perform steps 2 & 3 also in the x86 scheduler fastpath, i.e., we add Merge event accounting to the existing used_mask logic.

5. Finally, we add on the programming of Merge event to the neighbouring PMC counters in the counter enable/disable{_all} code paths.

Currently, software does not support a single PMU with mixed 48- and 64-bit counting, so Large Increment event counts are limited to 48 bits. In set_period, we zero-out the upper 16 bits of the count, so the hardware doesn't copy them to the even counter's higher bits.

Simple invocation example showing counting 8 FLOPs per 256-bit/%ymm vaddps instruction executed in a loop 100 million times:

  perf stat -e cpu/fp_ret_sse_avx_ops.all/,cpu/instructions/ <workload>

  Performance counter stats for '<workload>':

       800,000,000      cpu/fp_ret_sse_avx_ops.all/u
       300,042,101      cpu/instructions/u

Prior to this patch, the reported SSE/AVX FLOPs retired count would be wrong.

[peterz: lots of renames and edits to the code]

Signed-off-by: Kim Phillips <kim.phillips@amd.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Peng Wang <rocking@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
-
Submitted by Kim Phillips
fix #29035100

commit 471af006a747f1c535c8a8c6c0973c320fe01b22 upstream

AMD Family 17h processors and above gain support for Large Increment per Cycle events. Unfortunately there is no CPUID or equivalent bit that indicates whether the feature exists or not, so we continue to determine eligibility based on a CPU family number comparison.

For Large Increment per Cycle events, we add a f17h-and-compatibles get_event_constraints_f17h() that returns an even counter bitmask: Large Increment per Cycle events can only be placed on PMCs 0, 2, and 4 out of the currently available 0-5. The only currently public event that requires this feature to report valid counts is PMCx003 "Retired SSE/AVX Operations".

Note that the CPU family logic in amd_core_pmu_init() is changed so as to be able to selectively add initialization for features available in ranges of backward-compatible CPU families. This Large Increment per Cycle feature is expected to be retained in future families.

A side-effect of assigning a new get_constraints function for f17h disables calling the old (prior to f15h) amd_get_event_constraints implementation left enabled by commit e40ed154 ("perf/x86: Add perf support for AMD family-17h processors"), which is no longer necessary since those North Bridge event codes are obsoleted.

Also fix a spelling mistake whilst in the area (calulating -> calculating).

Fixes: e40ed154 ("perf/x86: Add perf support for AMD family-17h processors")
Signed-off-by: Kim Phillips <kim.phillips@amd.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20191114183720.19887-2-kim.phillips@amd.com
Signed-off-by: Peng Wang <rocking@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
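A hedged sketch of the even-counter constraint idea only (the driver's actual constraint plumbing differs): PMCs 0, 2 and 4 out of 0-5 give a counter bitmask of 0b010101 = 0x15.

  /* Large Increment per Cycle events may only start on an even PMC,
   * since the odd neighbour is consumed by the Merge event. */
  static struct event_constraint pair_constraint =
          EVENT_CONSTRAINT(0, 0x15, 0);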
-
Submitted by Reinette Chatre
fix #29035100

commit 1182a49529edde899be4b4f0e1ab76e626976eb6 upstream

perf_event_read_local() is the safest way to obtain measurements associated with performance events. In some cases the overhead introduced by perf_event_read_local() affects the measurements and the use of rdpmcl() is needed. rdpmcl() requires the index of the performance counter used, so a helper is introduced to determine the index used by a provided performance event.

The index used by a performance event may change when interrupts are enabled. A check is added to ensure that the index is only accessed with interrupts disabled. Even with this check the use of this counter needs to be done with care to ensure it is queried and used within the same disabled interrupts section.

This change introduces a new checkpatch warning:

  CHECK: extern prototypes should be avoided in .h files
  +extern int x86_perf_rdpmc_index(struct perf_event *event);

This warning was discussed and designated as a false positive in:
http://lkml.kernel.org/r/20180919091759.GZ24124@hirez.programming.kicks-ass.net

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: fenghua.yu@intel.com
Cc: tony.luck@intel.com
Cc: acme@kernel.org
Cc: gavin.hindman@intel.com
Cc: jithu.joseph@intel.com
Cc: dave.hansen@intel.com
Cc: hpa@zytor.com
Link: https://lkml.kernel.org/r/b277ffa78a51254f5414f7b1bc1923826874566e.1537377064.git.reinette.chatre@intel.com
Signed-off-by: Peng Wang <rocking@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
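A hedged usage sketch of the new helper (whose prototype is quoted above): since the index is only stable while interrupts are disabled, query and read inside one irq-off section.

  unsigned long flags;
  u64 val;
  int idx;

  local_irq_save(flags);
  idx = x86_perf_rdpmc_index(event);
  rdpmcl(idx, val);                /* raw RDPMC read of that counter */
  local_irq_restore(flags);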
-
Submitted by Kan Liang
fix #29130534

commit 2b3b76b5ec67568da4bb475d3ce8a92ef494b5de upstream.

Backport summary: Backport to kernel 4.19.57 for ICX uncore support.

The uncore subsystem in Ice Lake server is similar to previous servers. There are some differences in config register encoding and PCI device IDs. The uncore PMON units in Ice Lake server include Ubox, Chabox, IIO, IRP, M2PCIE, PCU, M2M, PCIE3 and IMC.

- For CHA, the filter 1 register has been removed. The filter 0 register can be used by any of the CHA events to be filtered by Thread/Core-ID. To do so, the control register's tid_en bit must be set to 1.
- For IIO, there are some changes on event constraints. The MSR address and MSR offsets among counters are also changed.
- For IRP, the MSR address and MSR offsets among counters are changed.
- For M2PCIE, the counters are accessed by MSR now. Add new MSR address and MSR offsets. Change event constraints.
- To determine the number of CHAs, CAPID6 (low) and CAPID7 (high) have to be read now.
- For M2M, update the PCICFG address and Device ID.
- For UPI, update the PCICFG address, Device ID and counter address.
- For M3UPI, update the PCICFG address, Device ID, counter address and event constraints.
- For IMC, update the formula to calculate the MMIO BAR address, which is MMIO_BASE + a specific MEM_BAR offset.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/1585842411-150452-1-git-send-email-kan.liang@linux.intel.com
Signed-off-by: Yunying Sun <yunying.sun@intel.com>
Signed-off-by: Peng Wang <rocking@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
-
Submitted by Kan Liang
fix #29130534

commit bc88a2fe216a51e8ab46d61f89d0c1b5a400470e upstream.

Backport summary: Backport to kernel 4.19.57 for ICX uncore support.

The offset between uncore boxes of free-running counters varies, e.g. IIO free-running counters on Ice Lake server.

Add box_offsets, an array of offsets between adjacent uncore boxes.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/1584470314-46657-1-git-send-email-kan.liang@linux.intel.com
Signed-off-by: Yunying Sun <yunying.sun@intel.com>
Signed-off-by: Peng Wang <rocking@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
-
Submitted by Kan Liang
fix #29130534

commit 3442a9ecb8e72a33c28a2b969b766c659830e410 upstream.

Backport summary: Backport to kernel 4.19.57 for ICX uncore support.

The IMC uncore unit in Ice Lake server can only be accessed by MMIO, which is similar to Snow Ridge. Factor out __snr_uncore_mmio_init_box so it can be shared with Ice Lake server in the following patch.

No functional changes.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/1584470314-46657-2-git-send-email-kan.liang@linux.intel.com
Signed-off-by: Yunying Sun <yunying.sun@intel.com>
Signed-off-by: Peng Wang <rocking@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
-
Submitted by Kan Liang
fix #29130534

commit ee49532b38dd084650bf715eabe7e3828fb8d275 upstream.

Backport summary: Backport to kernel 4.19.57 for ICX uncore support.

IMC uncore unit can only be accessed via MMIO on Snow Ridge. The MMIO space of IMC uncore is at the specified offsets from the MEM0_BAR. Add snr_uncore_get_mc_dev() to locate the PCI device with MMIO_BASE and MEM0_BAR register.

Add new ops to access the IMC registers via MMIO. Add 3 new free running counters for clocks, read and write bandwidth.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@kernel.org
Cc: eranian@google.com
Link: https://lkml.kernel.org/r/1556672028-119221-7-git-send-email-kan.liang@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Yunying Sun <yunying.sun@intel.com>
Signed-off-by: Peng Wang <rocking@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
-
Submitted by Kan Liang
fix #29130534

commit 07ce734dd8adc0f170d43c15a9b91b707a21b9d7 upstream.

Backport summary: Backport to kernel 4.19.57 for ICX uncore support.

The client IMC block is accessed by MMIO. Current code uses an informal way to access the block, which is not recommended. Clean up the code by using __iomem annotation and the accessor functions (read[lq]()). Move exit_box() and read_counter() to generic code, which can be shared with the server code later.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@kernel.org
Cc: eranian@google.com
Link: https://lkml.kernel.org/r/1556672028-119221-6-git-send-email-kan.liang@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Yunying Sun <yunying.sun@intel.com>
Signed-off-by: Peng Wang <rocking@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
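A hedged sketch of the access pattern the cleanup moves to (an __iomem-annotated mapping plus the read[lq]() accessors; the variable names are placeholders, not the driver's):

  void __iomem *base;
  u64 count;

  base = ioremap(box_phys_addr, box_map_size);
  if (!base)
          return;
  count = readq(base + counter_offset);   /* instead of raw pointer derefs */
  iounmap(base);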
-
Submitted by Kan Liang
fix #29130534

commit 3da04b8a00dd6d39970b9e764b78c5dfb40ec013 upstream.

Backport summary: Backport to kernel 4.19.57 for ICX uncore support.

A new MMIO type uncore box is introduced on the Snow Ridge server. The counters of an MMIO type uncore box can only be accessed by MMIO.

Add a new uncore type, uncore_mmio_uncores, for MMIO type uncore blocks.

Support MMIO type uncore blocks in CPU hot plug. The MMIO space has to be mapped/unmapped for the first/last CPU. The context also needs to be migrated if the bound CPU changes.

Add mmio_init() to init and register PMUs for MMIO type uncore blocks.

Add a helper to calculate the box_ctl address. The helpers which calculate ctl/ctr can be shared with PCI type uncore blocks.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@kernel.org
Cc: eranian@google.com
Link: https://lkml.kernel.org/r/1556672028-119221-5-git-send-email-kan.liang@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Yunying Sun <yunying.sun@intel.com>
Signed-off-by: Peng Wang <rocking@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
-
Submitted by Kan Liang
fix #29130534

commit c8872d90e0a3651a096860d3241625ccfa1647e0 upstream.

Backport summary: Backport to kernel 4.19.57 for ICX uncore support.

For an uncore box which can only be accessed by MSR, its reference box->refcnt is updated in CPU hot plug. The uncore boxes need to be initialized and exited accordingly for the first/last CPU of a socket.

Starting from the Snow Ridge server, a new type of uncore box is introduced, which can only be accessed by MMIO. The driver needs to map/unmap MMIO space for the first/last CPU of a socket.

Extract the code for box ref/unref and init/exit for reuse later. There is no functional change.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@kernel.org
Cc: eranian@google.com
Link: https://lkml.kernel.org/r/1556672028-119221-4-git-send-email-kan.liang@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Yunying Sun <yunying.sun@intel.com>
Signed-off-by: Peng Wang <rocking@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
-
Submitted by Kan Liang
fix #29130534

commit 210cc5f9db7a5c66b7ca6290b7d35cc7db7e9dbd upstream.

Backport summary: Backport to kernel 4.19.57 for ICX uncore support.

The uncore subsystem on Snow Ridge is similar to the previous SKX server. The uncore units on Snow Ridge include Ubox, Chabox, IIO, IRP, M2PCIE, PCU, M2M, PCIE3 and IMC.

- The config register encoding and PCI device IDs are changed.
- For CHA, the umask_ext and filter_tid fields are changed.
- For IIO, the ch_mask and fc_mask fields are changed.
- For M2M, the mask_ext field is changed.
- Add a new PCIe3 unit for the PCIe3 root port, which provides the interface between PCIe devices, plugged into the PCIe port, and the components (in M2IOSF).
- IMC can only be accessed via MMIO on Snow Ridge now. Current common code doesn't support it yet. IMC will be supported in following patches.
- There are 9 free-running counters for IIO CLOCKS and bandwidth In.
- The full uncore event list is not published yet. Event constraints are not included in this patch. They will be added later separately.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@kernel.org
Cc: eranian@google.com
Link: https://lkml.kernel.org/r/1556672028-119221-3-git-send-email-kan.liang@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Yunying Sun <yunying.sun@intel.com>
Signed-off-by: Peng Wang <rocking@linux.alibaba.com>
Reviewed-by: Artie Ding <artie.ding@linux.alibaba.com>
-
Submitted by Jeremy Linton
fix #26734090

commit d24a0c7099b32b6981d7f126c45348e381718350 upstream

ACPI 6.3 adds additional fields to the MADT GICC structure to describe SPE PPIs. We pick these out of the cached reference to the madt_gicc structure similarly to the core PMU code. We then create a platform device referring to the IRQ and let the user/module loader decide whether to load the SPE driver.

Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Xin Hao <xhao@linux.alibaba.com>
Reviewed-by: luanshi <zhangliguang@linux.alibaba.com>
-
Submitted by Wei Li
task #25552995

commit e1d22385ea6686ff3dcd7092d84465c193849829 upstream.

When enabling the ARM64_PSEUDO_NMI feature in a kdump capture kernel, it reports a kernel stack overflow exception:

  [ 0.000000] CPU features: detected: IRQ priority masking
  [ 0.000000] alternatives: patching kernel code
  [ 0.000000] Insufficient stack space to handle exception!
  [ 0.000000] ESR: 0x96000044 -- DABT (current EL)
  [ 0.000000] FAR: 0x0000000000000040
  [ 0.000000] Task stack: [0xffff0000097f0000..0xffff0000097f4000]
  [ 0.000000] IRQ stack: [0x0000000000000000..0x0000000000004000]
  [ 0.000000] Overflow stack: [0xffff80002b7cf290..0xffff80002b7d0290]
  [ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.19.34-lw+ #3
  [ 0.000000] pstate: 400003c5 (nZcv DAIF -PAN -UAO)
  [ 0.000000] pc : el1_sync+0x0/0xb8
  [ 0.000000] lr : el1_irq+0xb8/0x140
  [ 0.000000] sp : 0000000000000040
  [ 0.000000] pmr_save: 00000070
  [ 0.000000] x29: ffff0000097f3f60 x28: ffff000009806240
  [ 0.000000] x27: 0000000080000000 x26: 0000000000004000
  [ 0.000000] x25: 0000000000000000 x24: ffff000009329028
  [ 0.000000] x23: 0000000040000005 x22: ffff000008095c6c
  [ 0.000000] x21: ffff0000097f3f70 x20: 0000000000000070
  [ 0.000000] x19: ffff0000097f3e30 x18: ffffffffffffffff
  [ 0.000000] x17: 0000000000000000 x16: 0000000000000000
  [ 0.000000] x15: ffff0000097f9708 x14: ffff000089a382ef
  [ 0.000000] x13: ffff000009a382fd x12: ffff000009824000
  [ 0.000000] x11: ffff0000097fb7b0 x10: ffff000008730028
  [ 0.000000] x9 : ffff000009440018 x8 : 000000000000000d
  [ 0.000000] x7 : 6b20676e69686374 x6 : 000000000000003b
  [ 0.000000] x5 : 0000000000000000 x4 : ffff000008093600
  [ 0.000000] x3 : 0000000400000008 x2 : 7db2e689fc2b8e00
  [ 0.000000] x1 : 0000000000000000 x0 : ffff0000097f3e30
  [ 0.000000] Kernel panic - not syncing: kernel stack overflow
  [ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.19.34-lw+ #3
  [ 0.000000] Call trace:
  [ 0.000000]  dump_backtrace+0x0/0x1b8
  [ 0.000000]  show_stack+0x24/0x30
  [ 0.000000]  dump_stack+0xa8/0xcc
  [ 0.000000]  panic+0x134/0x30c
  [ 0.000000]  __stack_chk_fail+0x0/0x28
  [ 0.000000]  handle_bad_stack+0xfc/0x108
  [ 0.000000]  __bad_stack+0x90/0x94
  [ 0.000000]  el1_sync+0x0/0xb8
  [ 0.000000]  init_gic_priority_masking+0x4c/0x70
  [ 0.000000]  smp_prepare_boot_cpu+0x60/0x68
  [ 0.000000]  start_kernel+0x1e8/0x53c
  [ 0.000000] ---[ end Kernel panic - not syncing: kernel stack overflow ]---

The reason is that init_gic_priority_masking() may unmask PSR.I while the irq stacks are not initialized yet. Some "NMI" could unfortunately be raised and would go straight into this exception.

In this patch, we just write the PMR in smp_prepare_boot_cpu(), and delay unmasking PSR.I until after the irq stacks are initialized in init_IRQ().

Fixes: e79321883842 ("arm64: Switch to PMR masking when starting CPUs")
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Wei Li <liwei391@huawei.com>
[JT: make init_gic_priority_masking() not modify daif, rebase on other priority masking fixes]
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Reviewed-by: luanshi <zhangliguang@linux.alibaba.com>
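A hedged sketch of the resulting ordering in arch/arm64/kernel/irq.c (based on the upstream patch description; surrounding details may differ in this tree):

  void __init init_IRQ(void)
  {
          init_irq_stacks();
          irqchip_init();
          if (!handle_arch_irq)
                  panic("No interrupt controller found.");

          /* Only now, with the irq stacks in place, is it safe to take
           * pseudo-NMIs: unmask PSR.I and mask via PMR instead. */
          if (system_uses_irq_prio_masking())
                  local_daif_restore(DAIF_PROCCTX_NOIRQ);
  }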
-
Submitted by Marc Zyngier
task #25552995

commit f226650494c6aa87526d12135b7de8b8c074f3de upstream.

The GICv3 architecture specification is incredibly misleading when it comes to PMR and the requirement for a DSB. It turns out that this DSB is only required if the CPU interface sends an Upstream Control message to the redistributor in order to update the RD's view of PMR.

This message is only sent when ICC_CTLR_EL1.PMHE is set, which isn't the case in Linux. It can still be set from EL3, so some special care is required. But the upshot is that in the (hopefully large) majority of the cases, we can drop the DSB altogether.

This relies on a new static key being set if the boot CPU has PMHE set. The drawback is that this static key has to be exported to modules.

Cc: Will Deacon <will@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Reviewed-by: luanshi <zhangliguang@linux.alibaba.com>
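A hedged sketch of the conditional barrier this results in (the static key name follows the upstream description but is an assumption here):

  /* The DSB is only needed when ICC_CTLR_EL1.PMHE was set on the boot
   * CPU, i.e. when a PMR write must reach the redistributor via an
   * Upstream Control message. */
  static inline void pmr_sync(void)
  {
          if (static_branch_unlikely(&gic_pmr_sync))
                  dsb(sy);
  }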
-
Submitted by Julien Thierry
task #25552995

commit 677379bc9139ac24b310a281fcb21a2f04288353 upstream.

On a system with two security states, if SCR_EL3.FIQ is cleared, non-secure IRQ priorities get shifted to fit the secure view, but the priority masks aren't. On such a system, it turns out that GIC_PRIO_IRQON masks the priority of normal interrupts, which obviously ends up in a hang.

Increase the GIC_PRIO_IRQON value (i.e., lower the priority) to make sure interrupts are not blocked by it.

Cc: Oleg Nesterov <oleg@redhat.com>
Fixes: bd82d4bd21880b7c ("arm64: Fix incorrect irqflag restore for priority masking")
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Julien Thierry <julien.thierry.kdev@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
[will: fixed Fixes: tag]
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Reviewed-by: luanshi <zhangliguang@linux.alibaba.com>
-
Submitted by James Morse
task #25552995

commit f46f27a576cc3b1e3d45ea50bc06287aa46b04b2 upstream.

Commit bd82d4bd2188 ("arm64: Fix incorrect irqflag restore for priority masking") added a macro to the entry.S call paths that leave the PSTATE.I bit set. This tells the pNMI masking logic that interrupts are masked by the CPU, not by the PMR. This value is read back by local_daif_save().

Commit bd82d4bd2188 added this call to el0_svc, as el0_svc_handler is called with interrupts masked. el0_svc_compat was missed, but should be covered in the same way, as both of these paths end up in el0_svc_common(), which expects to unmask interrupts.

Fixes: bd82d4bd2188 ("arm64: Fix incorrect irqflag restore for priority masking")
Signed-off-by: James Morse <james.morse@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Reviewed-by: luanshi <zhangliguang@linux.alibaba.com>
-
Submitted by Julien Thierry
Fix #25552995

commit 48ce8f80f5901f1f031b00be66d659d39f33b0a1 upstream.

Using IRQ priority masking to enable/disable interrupts is a bit sensitive as it requires to deal with both ICC_PMR_EL1 and PSR.I.

Introduce some validity checks to both highlight the states in which functions dealing with IRQ enabling/disabling can (not) be called, and bark a warning when called in an unexpected state.

Since these checks are done on hotpaths, introduce a build option to choose whether to do the checking.

Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Reviewed-by: luanshi <zhangliguang@linux.alibaba.com>
-
Submitted by Julien Thierry
task #25552995

commit bd82d4bd21880b7c4d5f5756be435095d6ae07b5 upstream.

When using IRQ priority masking to disable interrupts, in order to deal with the PSR.I state, local_irq_save() would convert the I bit into a PMR value (GIC_PRIO_IRQOFF). This resulted in local_irq_restore() potentially modifying the value of PMR at undesired locations due to the state of PSR.I upon flag saving [1].

In an attempt to solve this issue in a less hackish manner, introduce a bit (GIC_PRIO_PSR_I_SET) in the PMR values that can represent whether PSR.I is being used to disable interrupts, in which case it takes precedence over the status of interrupt masking via PMR.

GIC_PRIO_PSR_I_SET is chosen such that (<pmr_value> | GIC_PRIO_PSR_I_SET) does not mask more interrupts than <pmr_value>, as some sections (e.g. arch_cpu_idle(), interrupt acknowledge path) require PMR not to mask interrupts that could be signaled to the CPU when using only PSR.I.

[1] https://www.spinics.net/lists/arm-kernel/msg716956.html

Fixes: 4a503217ce37 ("arm64: irqflags: Use ICC_PMR_EL1 for interrupt masking")
Cc: <stable@vger.kernel.org> # 5.1.x-
Reported-by: Zenghui Yu <yuzenghui@huawei.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Wei Li <liwei391@huawei.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Pouloze <suzuki.poulose@arm.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Reviewed-by: luanshi <zhangliguang@linux.alibaba.com>
-