- 10 Aug 2017, 16 commits
-
-
Committed by Christophe Leroy

Two config options exist to define powerpc MPC8xx:

  * CONFIG_PPC_8xx
  * CONFIG_8xx

arch/powerpc/platforms/Kconfig.cputype has contained the following comment about the CONFIG_8xx item for some years:

  "# this is temp to handle compat with arch=ppc"

arch/powerpc is now the only place with remaining uses of CONFIG_8xx: get rid of them.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
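The conversion itself is mechanical; a hedged sketch of the pattern (illustrative, not a verbatim hunk from the patch):

```diff
-#if defined(CONFIG_8xx)
+#if defined(CONFIG_PPC_8xx)
```

The same substitution applies to plain #ifdef guards and Makefile conditionals.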
-
Committed by Christophe Leroy

The 8xx cannot access the TBL and TBU registers using mfspr/mtspr; they must be accessed using mftb/mftbu. Because of this, there are a number of places guarded by #ifdef CONFIG_8xx.

This patch defines new macros MFTBL(x) and MFTBU(x) on the same model as MFTB(x) and tries to make use of them as much as possible. In arch/powerpc/include/asm/timex.h, we also remove the ifdef around the asm() operands, as the compiler doesn't mind unused operands.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
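A minimal sketch of what such macros can look like, assuming the usual SPRN_TBRL/SPRN_TBRU names for the timebase SPRs (the exact definitions are in the patch):

```c
/* Read the timebase with one macro regardless of CPU family */
#ifdef CONFIG_8xx
#define MFTBL(dest)	mftb	dest
#define MFTBU(dest)	mftbu	dest
#else
#define MFTBL(dest)	mfspr	dest, SPRN_TBRL
#define MFTBU(dest)	mfspr	dest, SPRN_TBRU
#endif
```

Callers can then use MFTBL/MFTBU unconditionally instead of sprinkling #ifdef CONFIG_8xx at every site.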
-
Committed by Christophe Leroy

Since commit aa42c69c ("[POWERPC] Add support for FP emulation for the e300c2 core"), program_check_exception() can be called for math emulation. In that case, 'reason' is 0.

On the 8xx, there is a Software Emulation interrupt which is raised for all unimplemented and illegal instructions. This interrupt calls SoftwareEmulation(), which does almost the same as program_check_exception() called with reason = 0. The Software Emulation interrupt sets all reason bits to 0, so it is possible to call program_check_exception() directly from the interrupt handler.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Christophe Leroy

In the same spirit as what was done for 4xx and 44x, move the 8xx machine check handler into platforms/8xx.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Ellerman

Currently we open code the reason codes for program checks. Instead, use the existing SRR1 defines.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Ellerman

We already have mce.c, which is built for 64-bit and contains other parts of the machine check code, so move these bits in there too.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Ellerman

Make it clear that the fallback version of machine_check_generic() is only used on 32-bit configs.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Ellerman

get_mc_reason() no longer provides (if it ever really did) any meaningful abstraction, so remove it.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Ellerman

Now that we have a 4xx platform directory, we can move the 4xx machine check handler in there. Again we drop get_mc_reason() and replace it with regs->dsisr directly (which is actually SPRN_ESR).

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Ellerman

We have several 44x machine check handlers defined in traps.c. It would be preferable if they were split out with the platforms that use them. Do that.

In the process, drop get_mc_reason() and instead just open code the lookup of the reason from regs->dsisr. This avoids a pointless layer of abstraction. We know to use regs->dsisr because 44x enables BOOKE, which enables PPC_ADV_DEBUG_REGS, and FSL_BOOKE is not enabled on 44x builds.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Ellerman

Currently we build the 47x cputable entries even when CONFIG_PPC_47x is disabled. That means a kernel built without CONFIG_PPC_47x will claim to support a 47x CPU and start booting, only to break somewhere later because it doesn't have 47x support compiled in.

So guard the 47x cputable entries with CONFIG_PPC_47x. Note that this is inside the #ifdef CONFIG_44x section, because 47x depends on 44x.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
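Schematically, the new guard nests inside the existing 44x block (a sketch; the entry contents are elided):

```c
#ifdef CONFIG_44x
	/* ... 44x cputable entries ... */
#ifdef CONFIG_PPC_47x
	{	/* 476-family entries: only valid with 47x support built in */
		.cpu_name	= "476",
		/* ... */
	},
#endif /* CONFIG_PPC_47x */
#endif /* CONFIG_44x */
```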
-
Committed by Nicholas Piggin

This adds an irq counter for the watchdog soft-NMI. This interrupt only fires when interrupts are soft-disabled, so it will not increment much even when the watchdog is running. However, it's useful for debugging and sanity checking.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Nicholas Piggin

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Nicholas Piggin

The powerpc kernel/watchdog.o should be built when HARDLOCKUP_DETECTOR and HAVE_HARDLOCKUP_DETECTOR_ARCH are both selected. If only the former is selected, then the generic perf watchdog has been selected instead.

To simplify this check, introduce a new Kconfig symbol PPC_WATCHDOG that depends on both. This Kconfig option means the powerpc-specific watchdog is enabled.

Without this patch, Book3E will attempt to build the powerpc watchdog.

Fixes: 2104180a ("powerpc/64s: implement arch-specific hardlockup watchdog")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
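A hedged sketch of what the new symbol can look like (the actual Kconfig entry may be worded differently):

```
config PPC_WATCHDOG
	bool
	default y
	depends on HARDLOCKUP_DETECTOR
	depends on HAVE_HARDLOCKUP_DETECTOR_ARCH
```

The Makefile then presumably keys the object off this symbol, along the lines of obj-$(CONFIG_PPC_WATCHDOG) += watchdog.o, so Book3E (which lacks HAVE_HARDLOCKUP_DETECTOR_ARCH) no longer builds it.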
-
Committed by Nicholas Piggin

On 64-bit Book3S, when we're in HV mode, we have already counted the machine check exception in machine_check_early().

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Use IS_ENABLED() rather than an #ifdef]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Andreas Schwab

binutils >= 2.26 now warns about misuse of register expressions in assembler operands that are actually literals, for example:

  arch/powerpc/kernel/entry_64.S:535: Warning: invalid register expression

In practice these are almost all uses of r0 that should just be a literal 0.

Signed-off-by: Andreas Schwab <schwab@linux-m68k.org>
[mpe: Mention r0 is almost always the culprit, fold in purgatory change]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
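The class of fix looks like this (illustrative, not a verbatim hunk from the patch):

```
	/* before: r0 in an operand slot that expects a literal;
	   binutils >= 2.26 warns "invalid register expression" */
	cmpwi	r4, r0

	/* after: the literal zero that was intended */
	cmpwi	r4, 0
```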
-
- 08 Aug 2017, 1 commit
-
-
Committed by Christophe Leroy

Commit d300627c ("powerpc/6xx: Handle DABR match before calling do_page_fault") breaks non-6xx platforms:

  Failed to execute /init (error -14)
  Starting init: /bin/sh exists but couldn't execute it (error -14)
  Kernel panic - not syncing: No working init found.  Try passing init= ...
  CPU: 0 PID: 1 Comm: init Not tainted 4.13.0-rc3-s3k-dev-00143-g7aa62e972a56 #56
  Call Trace:
    panic+0x108/0x250 (unreliable)
    rootfs_mount+0x0/0x58
    ret_from_kernel_thread+0x5c/0x64
  Rebooting in 180 seconds..

This is because in handle_page_fault(), the call to do_page_fault() has been mistakenly enclosed inside an #ifdef CONFIG_6xx.

Fixes: d300627c ("powerpc/6xx: Handle DABR match before calling do_page_fault")
Brown-paper-bag-to-be-worn-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 03 Aug 2017, 3 commits
-
-
Committed by Benjamin Herrenschmidt

This uses the newly defined constants rather than open-coded numbers. There is a side effect on 64-bit, which is to pass through some of the new P9 bits that we didn't before.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Benjamin Herrenschmidt

We test a number of bits from DSISR/SRR1 before deciding to call hash_page(). If any of these is set, we go directly to do_page_fault(), as the bit indicates a fault that needs to be handled there (no hashing needed).

This updates the current open-coded masks to use the new DSISR definitions. This *does* change the masks actually used in two ways:

- We used to test various bits that were defined as "always 0" in the architecture and could be repurposed for something else. From now on, we just ignore such bits.
- We were missing some new bits defined on P9.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Benjamin Herrenschmidt

On legacy 6xx 32-bit processors, we checked for the DABR match bit in DSISR from do_page_fault(), in the middle of a pile of ifdefs, because all other CPU types do it in assembly prior to calling do_page_fault(). Fix that.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[mpe: Add #ifdef CONFIG_6xx]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 02 Aug 2017, 1 commit
-
-
Committed by Benjamin Herrenschmidt

By filtering the relevant SRR1 bits in the assembly rather than in do_page_fault() itself, we avoid a conditional branch (since we already come from different paths for data and instruction faults). This will allow more simplifications later.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 01 Aug 2017, 2 commits
-
-
Committed by Victor Aoqui

Replace the __this_cpu_read() with raw_cpu_read() in iommu_range_alloc(). Otherwise we get a warning about using __this_cpu_read() in preemptible code:

  BUG: using __this_cpu_read() in preemptible
  caller is iommu_range_alloc+0xa8/0x3d0

Preemption doesn't need to be disabled, since according to the comment any CPU can safely use any IOMMU pool.

Signed-off-by: Victor Aoqui <victora@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
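A hedged sketch of the one-line change in iommu_range_alloc(), assuming the per-CPU variable is named iommu_pool_hash as in mainline of this era:

```c
	/* was: pool_nr = __this_cpu_read(iommu_pool_hash) & (tbl->nr_pools - 1);
	 * __this_cpu_read() trips the preemption debug check here.
	 * raw_cpu_read() skips the check; a stale CPU number is harmless
	 * because any CPU may allocate from any pool. */
	pool_nr = raw_cpu_read(iommu_pool_hash) & (tbl->nr_pools - 1);
```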
-
Committed by Gautham R. Shenoy

The stop4 idle state on POWER9 is a deep idle state which loses hypervisor resources, but whose latency is low enough that it can be exposed via cpuidle.

Until now, the deep idle states which lose hypervisor resources (e.g. winkle) were only exposed via CPU hotplug. Hence, currently on wakeup from such states, barring a few SPRs which need to be restored to their older values, the rest of the SPRs are reinitialized to their boot-time values.

When stop4 is used in the context of cpuidle, we want these additional SPRs to be restored to their older values, to ensure that the context on the CPU coming back from idle is the same as it was before going idle.

In this patch, we define an SPR save area in the PACA (since we have used up the volatile register space in the stack) and on POWER9, we restore SPRN_PID, SPRN_LDBAR, SPRN_FSCR, SPRN_HFSCR, SPRN_MMCRA, SPRN_MMCR1, SPRN_MMCR2 to the values they had before entering stop.

Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
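A minimal sketch of such a save area, with the field list assumed from the SPRs named above (not copied from the patch):

```c
/* SPRs saved across stop4 and restored on wakeup; hung off the PACA */
struct stop_sprs {
	u64 pid;
	u64 ldbar;
	u64 fscr;
	u64 hfscr;
	u64 mmcr1;
	u64 mmcr2;
	u64 mmcra;
};
```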
-
- 18 Jul 2017, 2 commits
-
-
Committed by Nicholas Piggin

A previous optimisation incorrectly assumed the PAPR hcall does not use r12, and clobbers it upon entry. In fact it is used as an input. This can result in KVM guests crashing (observed with PR KVM).

Instead of using r12 to save r13, this patch saves r13 in ctr. This is more costly, but not as slow as using the SPRG.

Fixes: acd7d8ce ("powerpc/64s: Optimize hypercall/syscall entry")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
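Schematically, the entry path becomes something like this (a sketch of the idea per the commit description, not the exact hunk):

```
	/* r12 is a hypercall input, so it cannot be used as scratch.
	   Park r13 in CTR instead, which is safe to clobber here. */
	mtctr	r13
	GET_PACA(r13)
	/* ... */
	mfctr	r9		/* recover the caller's r13 when needed */
```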
-
Committed by Nicholas Piggin

POWER9 DD2 can see spurious PMU interrupts after state-loss idle in some conditions. A solution is to save and reload MMCR0 over state-loss idle.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Tested-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 13 Jul 2017, 4 commits
-
-
Committed by Daniel Axtens

prom_init is a bit special; in theory it should be able to be linked separately from the kernel. To keep this from getting too complex, the symbols that prom_init.c uses are checked. Fortification adds symbols, and it gets quite messy as it includes things like panic(). So just don't fortify prom_init.c for now.

Link: http://lkml.kernel.org/r/1497903987-21002-6-git-send-email-keescook@chromium.org
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Cc: Daniel Micay <danielmicay@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
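The usual mechanism for this (a hedged sketch) is a per-file flag in arch/powerpc/kernel/Makefile; the fortified helpers in linux/string.h compile out when __NO_FORTIFY is defined:

```make
# prom_init has a restricted symbol set; fortification would drag in
# symbols such as panic() that are not allowed there
CFLAGS_prom_init.o += -D__NO_FORTIFY
```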
-
Committed by Nicholas Piggin

Implement an arch-specific watchdog rather than use the perf-based hardlockup detector. The new watchdog takes the soft-NMI directly, rather than going through perf. Perf interrupts are to be made maskable in future, so that would prevent the perf detector from working in those regions.

Additionally, implement an SMP-based detector where all CPUs watch one another by pinging a shared cpumask. This is because powerpc Book3S does not have a true periodic local NMI, but some platforms do implement a true NMI IPI. If a CPU is stuck with interrupts hard disabled, the soft-NMI watchdog does not work, but the SMP watchdog will. Even on platforms without a true NMI IPI to get a good trace from the stuck CPU, other CPUs will notice the lockup sufficiently to report it and panic.

[npiggin@gmail.com: honor watchdog disable at boot/hotplug]
Link: http://lkml.kernel.org/r/20170621001346.5bb337c9@roar.ozlabs.ibm.com
[npiggin@gmail.com: fix false positive warning at CPU unplug]
Link: http://lkml.kernel.org/r/20170630080740.20766-1-npiggin@gmail.com
[akpm@linux-foundation.org: coding-style fixes]
Link: http://lkml.kernel.org/r/20170616065715.18390-6-npiggin@gmail.com
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Don Zickus <dzickus@redhat.com>
Tested-by: Babu Moger <babu.moger@oracle.com> [sparc]
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Nicholas Piggin

Split SOFTLOCKUP_DETECTOR from LOCKUP_DETECTOR, and split HARDLOCKUP_DETECTOR_PERF from HARDLOCKUP_DETECTOR.

LOCKUP_DETECTOR implies the general boot, sysctl, and programming interfaces for the lockup detectors.

An architecture that wants to use a hard lockup detector must define HAVE_HARDLOCKUP_DETECTOR_PERF or HAVE_HARDLOCKUP_DETECTOR_ARCH. Alternatively, an arch can define HAVE_NMI_WATCHDOG, which provides the minimum arch_touch_nmi_watchdog; it otherwise does its own thing and does not implement the LOCKUP_DETECTOR interfaces.

sparc is unusual in that it has started to implement some of the interfaces, but not fully yet. It should probably be converted to a full HAVE_HARDLOCKUP_DETECTOR_ARCH.

[npiggin@gmail.com: fix]
Link: http://lkml.kernel.org/r/20170617223522.66c0ad88@roar.ozlabs.ibm.com
Link: http://lkml.kernel.org/r/20170616065715.18390-4-npiggin@gmail.com
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Don Zickus <dzickus@redhat.com>
Reviewed-by: Babu Moger <babu.moger@oracle.com>
Tested-by: Babu Moger <babu.moger@oracle.com> [sparc]
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
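A hedged sketch of the resulting dependency structure (the real lib/Kconfig.debug entries carry help text and extra defaults):

```
config SOFTLOCKUP_DETECTOR
	bool "Detect Soft Lockups"
	select LOCKUP_DETECTOR

config HARDLOCKUP_DETECTOR_PERF
	bool
	select SOFTLOCKUP_DETECTOR

config HARDLOCKUP_DETECTOR
	bool "Detect Hard Lockups"
	depends on HAVE_HARDLOCKUP_DETECTOR_PERF || HAVE_HARDLOCKUP_DETECTOR_ARCH
	select LOCKUP_DETECTOR
	select HARDLOCKUP_DETECTOR_PERF if HAVE_HARDLOCKUP_DETECTOR_PERF
```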
-
Committed by Xunlei Pang

vmcoreinfo_max_size stands for the size of vmcoreinfo_data; the correct one to use is vmcoreinfo_note, whose total size is VMCOREINFO_NOTE_SIZE. As explained in commit 77019967 ("kdump: fix exported size of vmcoreinfo note"), it should not affect the actual function, but we had better fix it; this change should also be safe and backward compatible.

After this, we can get rid of the variable vmcoreinfo_max_size and use the corresponding macros directly; fewer variables means more safety for vmcoreinfo operations.

[xlpang@redhat.com: fix build warning]
Link: http://lkml.kernel.org/r/1494830606-27736-1-git-send-email-xlpang@redhat.com
Link: http://lkml.kernel.org/r/1493281021-20737-2-git-send-email-xlpang@redhat.com
Signed-off-by: Xunlei Pang <xlpang@redhat.com>
Reviewed-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Reviewed-by: Dave Young <dyoung@redhat.com>
Cc: Hari Bathini <hbathini@linux.vnet.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 11 Jul 2017, 1 commit
-
-
Committed by Nicholas Piggin

There are two cases outside the normal address space management where a CPU's local TLB is to be flushed:

  1. Host boot; in case something has left stale entries in the TLB (e.g., kexec).
  2. Machine check; to clean corrupted TLB entries.

CPU state restore from deep idle states also flushes the TLB. However this seems to be a side effect of reusing the boot code to set CPU state, rather than a requirement itself.

The current flushing has a number of problems with ISA v3.0B:

- The current radix mode of the MMU is not taken into account. tlbiel is undefined if the R field does not match the current radix mode.
- ISA v3.0B hash must flush the partition and process table caches.
- ISA v3.0B radix must flush partition and process scoped translations, partition and process table caches, and also the page walk cache.

Add POWER9 cases to handle these, with radix vs hash determined by the host MMU mode.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 10 Jul 2017, 1 commit
-
-
Committed by Balbir Singh

This patch fixes a crash seen while doing a kexec from radix mode to hash mode. Key 0 is special in hash and used in the RPN by default; we set the key values to 0 today. In radix mode, key 0 is used to control supervisor<->user access. Since hash uses key 0 by default, the first instruction after the switch causes a crash on kexec.

Commit 3b10d009 ("powerpc/mm/radix: Prevent kernel execution of user space") introduced the setting of IAMR and AMOR values to prevent execution of user mode instructions from supervisor mode. We need to clean up these SPRs on kexec.

Fixes: 3b10d009 ("powerpc/mm/radix: Prevent kernel execution of user space")
Cc: stable@vger.kernel.org # v4.10+
Reported-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
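Conceptually, the fix is to reset the key-related SPRs in the kexec down-path before the new kernel takes over. A sketch of the idea; the reset values shown are assumptions, not taken from the patch:

```c
/* Scrub radix key 0 state so the incoming hash kernel's first
 * instructions are not blocked by leftover IAMR/AMOR settings */
mtspr(SPRN_IAMR, 0);	/* drop instruction-access restrictions */
mtspr(SPRN_AMOR, ~0UL);	/* leave all keys under the next kernel's control */
```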
-
- 08 Jul 2017, 1 commit
-
-
Committed by Naveen N. Rao

Rename function_offset_within_entry() to scope it to the kprobe namespace by using a kprobe_ prefix, and to also simplify it.

Suggested-by: Ingo Molnar <mingo@kernel.org>
Suggested-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/3aa6c7e2e4fb6e00f3c24fa306496a66edb558ea.1499443367.git.naveen.n.rao@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 03 Jul 2017, 8 commits
-
-
Committed by Balbir Singh

For CONFIG_STRICT_KERNEL_RWX, align __init_begin to 16M. We use 16M since it is the larger of the 2M mapping size on radix and the 16M mapping size on hash for our linear mapping. The plan is to have .text, .rodata and everything up to __init_begin marked as RX. Note we still have executable read-only data. We could further align rodata to another 16M boundary. I've used keeping text plus rodata as read-only-executable as a trade-off to doing read-only-executable for text and read-only for rodata.

We don't use multiple PT_LOAD entries in PHDRS because we are not sure all bootloaders support them. This patch keeps PHDRS in vmlinux.lds.S the same as they were, with just one PT_LOAD for all of the kernel, marked as RWX (7).

mpe: What this means is the added alignment bloats the resulting binary on disk; a powernv kernel goes from 17M to 22M.

Signed-off-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
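A hedged sketch of the linker script change (the constant name is assumed; the patch defines the alignment once and applies it at the text/init boundary):

```c
/* arch/powerpc/kernel/vmlinux.lds.S */
#ifdef CONFIG_STRICT_KERNEL_RWX
#define STRICT_ALIGN_SIZE	(1 << 24)	/* 16M covers both hash and radix */
#else
#define STRICT_ALIGN_SIZE	PAGE_SIZE
#endif
	/* ... */
	. = ALIGN(STRICT_ALIGN_SIZE);
	__init_begin = .;
```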
-
Committed by Balbir Singh

So that we can implement STRICT_RWX, use patch_instruction() in optprobes.

Signed-off-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Balbir Singh

arch_arm/disarm_probe() use direct assignment for copying instructions; replace them with patch_instruction(). We don't need to call flush_icache_range() because patch_instruction() does it for us.

Signed-off-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
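A minimal sketch of the converted helpers, matching the description above (error handling elided):

```c
void arch_arm_kprobe(struct kprobe *p)
{
	/* was: *p->addr = BREAKPOINT_INSTRUCTION; flush_icache_range(...); */
	patch_instruction(p->addr, BREAKPOINT_INSTRUCTION);
}

void arch_disarm_kprobe(struct kprobe *p)
{
	/* restore the original instruction; the icache flush happens inside */
	patch_instruction(p->addr, p->opcode);
}
```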
-
Committed by Naveen N. Rao

We can't take traps with relocation off, so blacklist enter_rtas() and rtas_return_loc(). However, instead of blacklisting all of enter_rtas(), introduce a new symbol __enter_rtas from where on we can't take a trap, and blacklist that.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Naveen N. Rao

Blacklist all functions involved while handling a trap. We:
- convert some of the symbols into private symbols, and
- blacklist most functions involved while handling a trap.

Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Naveen N. Rao

It is actually safe to probe system_call() in entry_64.S, but only until we unset MSR_RI. To allow this, add a new symbol system_call_exit() after the mtmsrd and blacklist that.

Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Naveen N. Rao

It is common to get a PMU interrupt right after the mtmsr instruction that enables interrupts. Due to this, the stack trace profile gets needlessly split across system_call_common() and system_call().

Previously, the system_call() symbol was in its current place to hide a few earlier symbols, which have since been made private or removed entirely. So, let's move system_call() slightly higher up, right after the mtmsr instruction that enables interrupts. Convert existing references to system_call to a local syscall symbol.

Suggested-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Naveen N. Rao

Convert some of the symbols into private symbols, and blacklist system_call_common() and system_call() from kprobes. We can't take a trap in parts of these functions, as either MSR_RI is unset or the kernel stack pointer is not yet set up.

Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
[mpe: Don't convert system_call_common to _GLOBAL()]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-