- 15 July 2020, 1 commit
-
-
Submitted by Anton Blanchard

I'm seeing RCU warnings when exiting xmon. xmon resets the NMI watchdog, but does nothing with the RCU stall or soft lockup watchdogs. Add a helper function that handles all three. Signed-off-by: Anton Blanchard <anton@ozlabs.org> Acked-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200630100218.62a3c3fb@kryten.localdomain
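A minimal sketch of what such a helper could look like; the name xmon_touch_watchdogs() is illustrative, but touch_softlockup_watchdog_sync(), rcu_cpu_stall_reset() and touch_nmi_watchdog() are the existing kernel primitives for the three watchdogs mentioned above.

```c
#include <linux/nmi.h>		/* touch_nmi_watchdog(), touch_softlockup_watchdog_sync() */
#include <linux/rcupdate.h>	/* rcu_cpu_stall_reset() */

/* Quiesce all three watchdogs when leaving the debugger. */
static void xmon_touch_watchdogs(void)
{
	touch_softlockup_watchdog_sync();
	rcu_cpu_stall_reset();
	touch_nmi_watchdog();
}
```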
-
- 10 June 2020, 1 commit
-
-
Submitted by Mike Rapoport

Patch series "mm: consolidate definitions of page table accessors", v2.

The low level page table accessors (pXY_index(), pXY_offset()) are duplicated across all architectures and sometimes more than once. For instance, we have 31 definitions of pgd_offset() for 25 supported architectures. Most of these definitions are actually identical and typically boil down to, e.g.

	static inline unsigned long pmd_index(unsigned long address)
	{
		return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
	}

	static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
	{
		return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address);
	}

These definitions can be shared among 90% of the arches provided XYZ_SHIFT, PTRS_PER_XYZ and xyz_page_vaddr() are defined. For architectures that really need a custom version there is always the possibility to override the generic version with the usual ifdefs magic.

These patches introduce include/linux/pgtable.h, which replaces include/asm-generic/pgtable.h, and add the definitions of the page table accessors to the new header.

This patch (of 12):

The linux/mm.h header includes <asm/pgtable.h> to allow inlining of the functions involving page table manipulations, e.g. pte_alloc() and pmd_alloc(). So there is no point in explicitly including <asm/pgtable.h> in files that include <linux/mm.h>. The include statements in such cases are removed with a simple loop:

	for f in $(git grep -l "include <linux/mm.h>") ; do
		sed -i -e '/include <asm\/pgtable.h>/ d' $f
	done

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chris Zankel <chris@zankel.net> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Mike Rapoport <rppt@kernel.org> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vincent Chen <deanbo422@gmail.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Will Deacon <will@kernel.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Link: http://lkml.kernel.org/r/20200514170327.31389-1-rppt@kernel.org Link: http://lkml.kernel.org/r/20200514170327.31389-2-rppt@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 05 June 2020, 1 commit
-
-
Submitted by Mike Rapoport

Implement the primitives necessary for the 4th level folding, add walks of the p4d level where appropriate, and replace 5level-fixup.h with pgtable-nop4d.h.

[rppt@linux.ibm.com: powerpc/xmon: drop unused pgdir variable in show_pte() function]
Link: http://lkml.kernel.org/r/20200519181454.GI1059226@linux.ibm.com
[rppt@linux.ibm.com: build fix]
Link: http://lkml.kernel.org/r/20200423141845.GI13521@linux.ibm.com

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Tested-by: Christophe Leroy <christophe.leroy@c-s.fr> # 8xx and 83xx Cc: Arnd Bergmann <arnd@arndb.de> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Geert Uytterhoeven <geert+renesas@glider.be> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: James Morse <james.morse@arm.com> Cc: Jonas Bonn <jonas@southpole.se> Cc: Julien Thierry <julien.thierry.kdev@gmail.com> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Mackerras <paulus@samba.org> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Will Deacon <will@kernel.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Link: http://lkml.kernel.org/r/20200414153455.21744-9-rppt@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 02 June 2020, 1 commit
-
-
Submitted by Michael Ellerman

Show the address of the task's regs in the process listing in xmon. The regs should always be on the stack page whose address we also print, but it's still helpful not to have to find them by hand. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200520111740.953679-1-mpe@ellerman.id.au
-
- 26 May 2020, 1 commit
-
-
Submitted by Michael Ellerman

In a few places we want to calculate the address of the next instruction. Previously that was simple: we just added 4 bytes, or if using a u32 * we incremented that pointer by 1. But prefixed instructions make it more complicated: we need to advance by either 4 or 8 bytes depending on the actual instruction. We also can't do pointer arithmetic using struct ppc_inst, because it is always 8 bytes in size on 64-bit, even though we might only need to advance by 4 bytes. So add a ppc_inst_next() helper which calculates the location of the next instruction, if the given instruction was located at the given address. Note the instruction doesn't need to actually be at that address in memory.

Although it would seem natural for the instruction to be passed by value, that makes it too easy to write a loop that will read off the end of a page, eg:

	for (; src < end;
	     src = ppc_inst_next(src, *src), dest = ppc_inst_next(dest, *dest))

As noticed by Christophe and Jordan, if end is the exact end of a page, and the next page is not mapped, this will fault, because *dest will read 8 bytes, 4 bytes into the next page. So the value is passed by reference instead, so the helper can be careful to use ppc_inst_read() on it.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Jordan Niethe <jniethe5@gmail.com> Link: https://lore.kernel.org/r/20200522133318.1681406-1-mpe@ellerman.id.au
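A rough sketch of the shape such a helper can take, assuming the ppc_inst_read() and ppc_inst_len() accessors described elsewhere in this series; the exact kernel macro differs in detail:

```c
/*
 * Compute the address of the instruction following the one at @location.
 * @value is a pointer to the instruction's value; it is read via
 * ppc_inst_read() so only the bytes the instruction actually occupies
 * are touched.
 */
#define ppc_inst_next(location, value)					\
({									\
	struct ppc_inst __tmp = ppc_inst_read(value);			\
	(void *)((unsigned long)(location) + ppc_inst_len(__tmp));	\
})
```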
-
- 18 May 2020, 17 commits
-
-
Submitted by Ravi Bangoria

Add support for the 2nd DAWR in xmon. With this, we can have two simultaneous breakpoints from xmon. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Michael Neuling <mikey@neuling.org> Link: https://lore.kernel.org/r/20200514111741.97993-17-ravi.bangoria@linux.ibm.com
-
Submitted by Ravi Bangoria

Xmon currently allows overwriting breakpoints because only one DAWR is supported. But with multiple DAWRs, overwriting becomes ambiguous or unnecessarily complicated. So let's not allow it. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Michael Neuling <mikey@neuling.org> Link: https://lore.kernel.org/r/20200514111741.97993-16-ravi.bangoria@linux.ibm.com
-
Submitted by Ravi Bangoria

Introduce a new parameter 'nr' to __set_breakpoint() which indicates which DAWR should be programmed. Also convert the current_brk variable to an array. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Michael Neuling <mikey@neuling.org> Link: https://lore.kernel.org/r/20200514111741.97993-7-ravi.bangoria@linux.ibm.com
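A simplified sketch of the reworked interface, assuming HBP_NUM_MAX as the number of hardware breakpoint slots and with the actual DAWR/DABR programming elided into a comment:

```c
#include <linux/percpu.h>
#include <linux/string.h>
#include <asm/hw_breakpoint.h>

/* One cached breakpoint per DAWR, per CPU (previously a single entry). */
static DEFINE_PER_CPU(struct arch_hw_breakpoint, current_brk[HBP_NUM_MAX]);

void __set_breakpoint(int nr, struct arch_hw_breakpoint *brk)
{
	memcpy(this_cpu_ptr(&current_brk[nr]), brk, sizeof(*brk));

	/* ... program DAWR<nr>/DAWRX<nr> (or DABR on older CPUs) from *brk ... */
}
```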
-
Submitted by Ravi Bangoria

Power10 is introducing a second DAWR. Use the real register names from the ISA for the current macros: s/SPRN_DAWR/SPRN_DAWR0/ s/SPRN_DAWRX/SPRN_DAWRX0/ Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Michael Neuling <mikey@neuling.org> Link: https://lore.kernel.org/r/20200514111741.97993-2-ravi.bangoria@linux.ibm.com
-
Submitted by Jordan Niethe

Do not allow placing xmon breakpoints on the suffix of a prefixed instruction. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> [mpe: Don't split printf strings across lines] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-27-jniethe5@gmail.com
-
Submitted by Jordan Niethe

For powerpc64, redefine the ppc_inst type so both word and prefixed instructions can be represented. On powerpc32 the type will remain the same. Update places which had assumed instructions to be 4 bytes long. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Reviewed-by: Alistair Popple <alistair@popple.id.au> [mpe: Rework the get_user_inst() macros to be parameterised, and don't assign to the dest if an error occurred. Use CONFIG_PPC64 not __powerpc64__ in a few places. Address other comments from Christophe. Fix some sparse complaints.] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-24-jniethe5@gmail.com
-
Submitted by Jordan Niethe

When a new breakpoint is created, the second instruction of that breakpoint is patched with a trap instruction. This assumes the length of the instruction is always the same. In preparation for prefixed instructions, remove this assumption. Insert the trap instruction at the same time the first instruction is inserted. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Alistair Popple <alistair@popple.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-20-jniethe5@gmail.com
-
Submitted by Jordan Niethe

Currently in xmon, mread() is used for reading instructions. In preparation for prefixed instructions, create and use a new function, mread_instr(), specifically for reading instructions. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Alistair Popple <alistair@popple.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-19-jniethe5@gmail.com
-
Submitted by Jordan Niethe

Prefixed instructions will mean there are instructions of different lengths. As a result, dereferencing a pointer to an instruction will not necessarily give the desired result. Introduce a function for reading instructions from memory into the instruction data type. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Alistair Popple <alistair@popple.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-13-jniethe5@gmail.com
-
Submitted by Jordan Niethe

Currently unsigned ints are used to represent instructions on powerpc. This has worked well as instructions have always been 4-byte words. However, ISA v3.1 introduces a change to instructions that means this scheme will no longer work as well: prefixed instructions. A prefixed instruction is made up of a word prefix followed by a word suffix, forming an 8-byte double word instruction. No matter the endianness of the system, the prefix always comes first. Prefixed instructions are only planned for powerpc64. Introduce a ppc_inst type to represent both prefixed and word instructions on powerpc64, while keeping it possible to exclusively have word instructions on powerpc32. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> [mpe: Fix compile error in emulate_spe()] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-12-jniethe5@gmail.com
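The eventual shape of such a type is roughly as sketched below (a word-sized value everywhere, plus a suffix word only on 64-bit); treat this as an illustration of the idea rather than the exact kernel definition:

```c
#include <linux/types.h>

struct ppc_inst {
	u32 val;		/* the word, or the prefix of a prefixed instruction */
#ifdef CONFIG_PPC64
	u32 suffix;		/* suffix word; unused for plain word instructions */
#endif
} __packed;
```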
-
Submitted by Jordan Niethe

In preparation for an instruction data type that cannot be directly used with the '==' operator, use functions for checking equality. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Balamuruhan S <bala24@linux.ibm.com> Link: https://lore.kernel.org/r/20200506034050.24806-11-jniethe5@gmail.com
-
Submitted by Jordan Niethe

In preparation for introducing a more complicated instruction type to accommodate prefixed instructions, use an accessor for getting an instruction as a u32. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-8-jniethe5@gmail.com
-
Submitted by Jordan Niethe

In preparation for instructions having a more complex data type, start using a macro, ppc_inst(), for making an instruction out of a u32. A macro is used so that instructions can be used as initializer elements. Currently this does nothing, but it will allow for creating a data type that can represent prefixed instructions. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> [mpe: Change include guard to _ASM_POWERPC_INST_H] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Alistair Popple <alistair@popple.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-7-jniethe5@gmail.com
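At this stage the macro can be a simple identity wrapper; the value of introducing it now is that call sites are already in place when the type later grows. A sketch of the idea:

```c
/*
 * For now an instruction is still just a u32, so ppc_inst() is an identity
 * wrapper that remains usable in static initializers. Later it becomes a
 * constructor for a richer instruction type.
 */
#define ppc_inst(x) (x)

/* Example: usable as an initializer element. */
static const unsigned int nop = ppc_inst(0x60000000);	/* ori r0,r0,0 */
```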
-
Submitted by Jordan Niethe

A modulo operation is used for calculating the current offset from a breakpoint within the breakpoint table. As instruction lengths are always a power of 2, this can be replaced with a bitwise 'and'. The current check for word alignment can be replaced with checking that the lower 2 bits are not set. Suggested-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Alistair Popple <alistair@popple.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-5-jniethe5@gmail.com
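A sketch of the two checks described above, assuming a power-of-two per-breakpoint slot size BPT_SIZE; the helper names are illustrative, not the actual xmon functions:

```c
/* Offset of nip within its breakpoint slot: mask instead of modulo. */
static unsigned long bpt_slot_offset(unsigned long nip, unsigned long table_base)
{
	unsigned long off = nip - table_base;

	return off & (BPT_SIZE - 1);		/* previously: off % BPT_SIZE */
}

/* Word alignment: the low two bits must be clear. */
static bool bpt_word_aligned(unsigned long off)
{
	return (off & 3) == 0;
}
```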
-
Submitted by Jordan Niethe

The instructions for xmon's breakpoints are stored in bpt_table[], which is in the data section. This is problematic as the data section may be marked as no execute. Move bpt_table[] to the text section. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-4-jniethe5@gmail.com
-
Submitted by Jordan Niethe

To execute an instruction out of line after a breakpoint, the NIP is set to the address of struct bpt::instr. Here a copy of the instruction that was replaced with a breakpoint is kept, along with a trap so normal flow can be resumed after XOLing. The struct bpt's are located within the data section. This is problematic as the data section may be marked as no execute. Instead of each struct bpt holding the instructions to be XOL'd, make a new array, bpt_table[], with enough space to hold instructions for the number of supported breakpoints. A later patch will move this to the text section. Make struct bpt::instr a pointer to the instructions in bpt_table[] associated with that breakpoint. This association is a simple mapping: bpts[n] -> bpt_table[n * words per breakpoint]. Currently we only need the copied instruction followed by a trap, so 2 words per breakpoint. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Alistair Popple <alistair@popple.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-3-jniethe5@gmail.com
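A condensed sketch of the arrangement described above; sizes and field layout are illustrative, not the exact xmon definitions:

```c
#define NBPTS		256	/* number of supported breakpoints */
#define BPT_WORDS	2	/* saved instruction + trap, per breakpoint */

struct bpt {
	unsigned long	address;
	unsigned int	*instr;		/* points into bpt_table[] */
	int		enabled;
};

static struct bpt bpts[NBPTS];
static unsigned int bpt_table[NBPTS * BPT_WORDS];

/* Association made when breakpoint n is allocated:
 *	bpts[n].instr = &bpt_table[n * BPT_WORDS];
 */
```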
-
Submitted by Jordan Niethe

For modifying instructions in xmon, patch_instruction() can serve the same role that store_inst() is performing, with the advantage of not being specific to xmon. In some places patch_instruction() is already being used, followed by store_inst(); in these cases just remove the store_inst(). Otherwise replace store_inst() with patch_instruction(). Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Link: https://lore.kernel.org/r/20200506034050.24806-2-jniethe5@gmail.com
-
- 15 May 2020, 3 commits
-
-
Submitted by Emil Velikov

With earlier commits, the API no longer discards the const-ness of the sysrq_key_op. As such we can add the const notation. Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Jiri Slaby <jslaby@suse.com> Cc: linux-kernel@vger.kernel.org Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: linuxppc-dev@lists.ozlabs.org Acked-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com> Link: https://lore.kernel.org/r/20200513214351.2138580-6-emil.l.velikov@gmail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Nicholas Piggin

A new system call interrupt will be added with a new trap number. Hide the explicit 0xc00 test behind an accessor to reduce churn in callers. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> [mpe: Make it a static inline] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200507121332.2233629-3-mpe@ellerman.id.au
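A minimal sketch of such an accessor; TRAP() already masks off the low flag bits of pt_regs.trap, and the name follows the wording above:

```c
static inline bool trap_is_syscall(struct pt_regs *regs)
{
	return TRAP(regs) == 0xc00;
}
```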
-
Submitted by Nicholas Piggin

The pt_regs.trap field keeps 4 low bits for some metadata about the trap or how it was handled, which is masked off in order to test the architectural trap number. Add a set_trap() accessor to set this, equivalent to TRAP() for returning it. This is actually not quite the equivalent of TRAP() because it always clears the low bits, which may be harmless if it can only be updated via the ptrace syscall, but it seems dangerous. In fact setting TRAP from ptrace doesn't seem like a great idea, so maybe it's better deleted. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> [mpe: Make it a static inline rather than a shouty macro] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200507121332.2233629-2-mpe@ellerman.id.au
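A rough sketch of the idea only: the low four bits of pt_regs.trap hold flags, and set_trap() stores a new trap number with those bits cleared (hence "not quite the equivalent of TRAP()"); the exact masking in the kernel may differ:

```c
static inline void set_trap(struct pt_regs *regs, unsigned long val)
{
	regs->trap = val & ~0xfUL;	/* low 4 flag bits are always cleared */
}
```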
-
- 26 March 2020, 1 commit
-
-
Submitted by Douglas Miller

The reason debuggers add an ASCII dump to other types of memory dumps is to give the user visual reference points in the case that ASCII strings are adjacent to other structures or elements. For example, when examining the task_struct structure one can look for the comm[] string and use it to locate other important elements. ASCII strings do not have endianness; they exist in memory in the same order regardless of CPU endianness. ASCII strings are, by definition, human readable and so should be presented in a human readable format. For these reasons, the supplemental ASCII dump does not re-order the strings from memory to match the endianness of the corresponding 16, 32, or 64 bit words. That would make the ASCII dump much less useful. Signed-off-by: Douglas Miller <dougmill@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1488205694-13337-1-git-send-email-dougmill@linux.vnet.ibm.com
-
- 25 March 2020, 1 commit
-
-
Submitted by Michael Ellerman

In xmon we have two variables that are used by the dump commands. There's ndump, which is the number of bytes to dump using 'd', and nidump, which is the number of instructions to dump using 'di'. ndump starts as 64 and nidump starts as 16, but both can be set by the user.

It's fairly common to be pasting addresses into xmon when trying to debug something, and if you inadvertently double paste an address like so:

	0:mon> di c000000002101f6c c000000002101f6c

the second value is interpreted as the number of instructions to dump. Luckily it doesn't dump 13 quintillion instructions: the value is limited to MAX_DUMP (128K). But as each instruction is dumped on a single line, that's still a lot of output. If you're on a slow console that can take multiple minutes to print. If you were "just popping in and out of xmon quickly before the RCU/hardlockup detector fires" you are now having a bad day.

Things are not as bad with 'd' because we print 16 bytes per line, so it's fewer lines. But it's still quite a lot. So shrink the maximum for 'd' to 64K (one page), which is 4096 lines. For 'di' add a new limit which is the above divided by 4, because instructions are 4 bytes, meaning again we can dump one page.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200219110007.31195-1-mpe@ellerman.id.au
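The limits described above amount to something like the following, using the values given in the description (one 64K page for 'd', a quarter of that for 4-byte instructions with 'di'); the macro names are illustrative:

```c
#define MAX_DUMP	(64 * 1024)		/* 'd'  : at most one 64K page of bytes        */
#define MAX_IDUMP	(MAX_DUMP >> 2)		/* 'di' : at most one page of 4-byte instructions */
```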
-
- 18 February 2020, 1 commit
-
-
Submitted by Oliver O'Halloran

The ls (lookup symbol) and zr (reboot) commands use xmon's getstring() helper to read a string argument from the xmon prompt. This function skips over leading whitespace, but doesn't check if the first "non-whitespace" character is a newline, which causes some odd behaviour (<enter> indicates the enter key was pressed):

	0:mon> ls printk<enter>
	printk: c0000000001680c4
	0:mon> ls<enter>
	printk<enter>
	Symbol ' printk' not found.
	0:mon>

With commit 2d9b332d ("powerpc/xmon: Allow passing an argument to ppc_md.restart()") we have a similar problem with the zr command. Previously zr took no arguments, so "zr<enter>" would trigger a reboot. With that patch applied a second newline needs to be sent in order for the reboot to occur. Fix this by checking if the leading whitespace ended on a newline:

	0:mon> ls<enter>
	Symbol '' not found.

Fixes: 2d9b332d ("powerpc/xmon: Allow passing an argument to ppc_md.restart()") Reported-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200217041343.2454-1-oohall@gmail.com
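A sketch of the fix in getstring(), using xmon's existing skipbl()/inchar() helpers and termch global; the copy loop is paraphrased rather than the exact code:

```c
static void getstring(char *s, int size)
{
	int c;

	c = skipbl();		/* skip leading spaces/tabs, return next char */
	if (c == '\n') {	/* bare <enter>: hand back an empty string */
		*s = 0;
		return;
	}

	do {
		if (size > 1) {
			*s++ = c;
			--size;
		}
		c = inchar();
	} while (c > ' ');
	*s = 0;
	termch = c;
}
```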
-
- 23 January 2020, 1 commit
-
-
Submitted by Oliver O'Halloran

On PowerNV a few different kinds of reboot are supported. We'd like to be able to exercise these from xmon, so allow 'zr' to take an argument and pass that to the ppc_md.restart() function. Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191101085522.3055-1-oohall@gmail.com
-
- 14 January 2020, 1 commit
-
-
Submitted by Sukadev Bhattiprolu

ASDR is HV-privileged and must only be accessed in HV-mode. This fixes a Program Check (0x700) when xmon in a VM dumps SPRs. Fixes: d1e1b351 ("powerpc/xmon: Add ISA v3.0 SPRs to SPR dump") Cc: stable@vger.kernel.org # v4.14+ Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com> Reviewed-by: Andrew Donnellan <ajd@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200107021633.GB29843@us.ibm.com
-
- 13 November 2019, 1 commit
-
-
Submitted by Ravi Bangoria

We are hardcoding the length everywhere in the watchpoint code. Introduce macros for the length and use them. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191017093204.7511-2-ravi.bangoria@linux.ibm.com
-
- 28 October 2019, 2 commits
-
-
Submitted by Christopher M. Riedl

Xmon should be either fully or partially disabled depending on the kernel lockdown state. Put xmon into read-only mode for lockdown=integrity and prevent user entry into xmon when lockdown=confidentiality.

Xmon checks the lockdown state on every attempted entry:
(1) during early xmon'ing
(2) when triggered via sysrq
(3) when toggled via debugfs
(4) when triggered via a previously enabled breakpoint

The following lockdown state transitions are handled:
(1) lockdown=none -> lockdown=integrity: set xmon read-only mode
(2) lockdown=none -> lockdown=confidentiality: clear all breakpoints, set xmon read-only mode, prevent user re-entry into xmon
(3) lockdown=integrity -> lockdown=confidentiality: clear all breakpoints, set xmon read-only mode, prevent user re-entry into xmon

Suggested-by: Andrew Donnellan <ajd@linux.ibm.com> Signed-off-by: Christopher M. Riedl <cmr@informatik.wtf> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190907061124.1947-3-cmr@informatik.wtf
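A sketch of what the entry-time check can look like, using the lockdown LSM's security_locked_down() levels; xmon_is_ro and clear_all_bpt() are assumed xmon-internal helpers and the messages are illustrative:

```c
#include <linux/security.h>

static bool xmon_is_locked_down(void)
{
	static bool lockdown;

	if (!lockdown) {
		lockdown = !!security_locked_down(LOCKDOWN_CONFIDENTIALITY_MAX);
		if (lockdown) {
			printf("xmon: Disabled due to kernel lockdown\n");
			clear_all_bpt();	/* drop previously enabled breakpoints */
		}
	}

	if (!xmon_is_ro) {
		xmon_is_ro = !!security_locked_down(LOCKDOWN_INTEGRITY_MAX);
		if (xmon_is_ro)
			printf("xmon: Read-only due to kernel lockdown\n");
	}

	return lockdown;
}
```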
-
Submitted by Christopher M. Riedl

Read-only mode should not prevent listing and clearing any active breakpoints. Tested-by: Daniel Axtens <dja@axtens.net> Reviewed-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Christopher M. Riedl <cmr@informatik.wtf> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190907061124.1947-2-cmr@informatik.wtf
-
- 13 September 2019, 1 commit
-
-
Submitted by Cédric Le Goater

When looping on the list of interrupts, add the current value of the PQ bits, retrieved with a load on the ESB page. This has the side effect of faulting in the ESB page of all interrupts. Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190910081850.26038-2-clg@kaod.org
-
- 19 August 2019, 3 commits
-
-
Submitted by Cédric Le Goater

Modify the xmon 'dxi' command to query all interrupts if no IRQ number is specified. Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190814154754.23682-4-clg@kaod.org
-
Submitted by Cédric Le Goater

The xmon 'dxi' command calls OPAL to query the XIVE configuration of an interrupt. This can only be done on baremetal (PowerNV) and it will crash a pseries machine. Introduce a new XIVE get_irq_config() operation which implements a different query depending on the platform, PowerNV or pseries, and modify xmon to use a top level wrapper. Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190814154754.23682-3-clg@kaod.org
-
Submitted by Cédric Le Goater

Currently, the xmon 'dx' command calls OPAL to dump the XIVE state in the OPAL logs and also outputs some of the fields of the internal XIVE structures in Linux. The OPAL calls can only be done on baremetal (PowerNV) and they crash a pseries machine. Fix by checking the hypervisor feature of the CPU. Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190814154754.23682-2-clg@kaod.org
-
- 04 July 2019, 1 commit
-
-
Submitted by Aneesh Kumar K.V

Even when we have HugeTLB and THP disabled, the kernel linear map can still be mapped with hugepages. This is only an issue with radix translation, because the hash MMU doesn't map the kernel linear range in the linux page table and other kernel map areas are not mapped using hugepages. Add config-independent helpers and put WARN_ON() where we don't expect things to be mapped via hugepages. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
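The pattern for a config-independent helper can be sketched as below: an architecture that maps leaves at the PMD level overrides it, and everyone else gets a stub returning false, so generic walkers can WARN_ON() unexpected huge mappings. The stub is an illustration of the pattern, not the full set of helpers.

```c
/* Fallback when the architecture does not provide its own definition. */
#ifndef pmd_is_leaf
#define pmd_is_leaf pmd_is_leaf
static inline bool pmd_is_leaf(pmd_t pmd)
{
	return false;
}
#endif
```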
-
- 02 July 2019, 1 commit
-
-
Submitted by Nicholas Piggin

The bad stack test in interrupt handlers has a few problems. For performance it is taken in the common case, which is a fetch bubble and a waste of i-cache. For code development and maintenance, it requires yet another stack frame setup routine, and that constrains all exception handlers to follow the same register save pattern, which inhibits future optimisation.

Remove the test/branch and replace it with a trap. Teach the program check handler to use the emergency stack for this case. This does not result in quite so nice a message, however the SRR0 and SRR1 of the crashed interrupt can be seen in r11 and r12, as can the original r1 (adjusted by INT_FRAME_SIZE). These are the most important parts for debugging the issue. The original r9-r12 and cr0 are lost, which is the main downside.

	kernel BUG at linux/arch/powerpc/kernel/exceptions-64s.S:847!
	Oops: Exception in kernel mode, sig: 5 [#1]
	BE SMP NR_CPUS=2048 NUMA PowerNV
	Modules linked in:
	CPU: 0 PID: 1 Comm: swapper/0 Not tainted
	NIP: c000000000009108 LR: c000000000cadbcc CTR: c0000000000090f0
	REGS: c0000000fffcbd70 TRAP: 0700 Not tainted
	MSR: 9000000000021032 <SF,HV,ME,IR,DR,RI> CR: 28222448 XER: 20040000
	CFAR: c000000000009100 IRQMASK: 0
	GPR00: 000000000000003d fffffffffffffd00 c0000000018cfb00 c0000000f02b3166
	GPR04: fffffffffffffffd 0000000000000007 fffffffffffffffb 0000000000000030
	GPR08: 0000000000000037 0000000028222448 0000000000000000 c000000000ca8de0
	GPR12: 9000000002009032 c000000001ae0000 c000000000010a00 0000000000000000
	GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
	GPR20: c0000000f00322c0 c000000000f85200 0000000000000004 ffffffffffffffff
	GPR24: fffffffffffffffe 0000000000000000 0000000000000000 000000000000000a
	GPR28: 0000000000000000 0000000000000000 c0000000f02b391c c0000000f02b3167
	NIP [c000000000009108] decrementer_common+0x18/0x160
	LR [c000000000cadbcc] .vsnprintf+0x3ec/0x4f0
	Call Trace:
	Instruction dump:
	996d098a 994d098b 38610070 480246ed 48005518 60000000 38200000 718a4000
	7c2a0b78 3821fd00 41c20008 e82d0970 <0981fd00> f92101a0 f9610170 f9810178

Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 01 July 2019, 1 commit
-
-
Submitted by Naveen N. Rao

Commit ed49f7fd ("powerpc/xmon: Disable tracing when entering xmon") added code to disable recording trace entries while in xmon. The commit introduced a variable 'tracing_enabled' to record if tracing was enabled on xmon entry, and used this to conditionally enable tracing during exit from xmon. However, we are not checking the value of the 'fromipi' variable in xmon_core() when setting 'tracing_enabled'. Due to this, when secondary cpus enter xmon, they will see tracing as being disabled already and tracing won't be re-enabled on exit. Fix this. Fixes: ed49f7fd ("powerpc/xmon: Disable tracing when entering xmon") Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
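The shape of the fix, in a sketch: only the initial (non-IPI) entry into xmon_core() samples and disables tracing, so secondary CPUs entering via IPI leave the saved state alone. tracing_is_on() and tracing_off() are the existing ftrace helpers; the placement comment is illustrative.

```c
	/* in xmon_core(), near the top of the entry path */
	if (!fromipi) {
		tracing_enabled = tracing_is_on();
		tracing_off();
	}
```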
-