- 20 Feb 2015, 1 commit
-
-
Committed by Martin Schwidefsky
Until we have hard performance data about the effects of CAD in the spinlock loop, disable the instruction by default. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 14 Feb 2015, 1 commit
-
-
Committed by Andrey Ryabinin
For instrumenting global variables KASan will shadow the memory backing modules. So on module loading we will need to allocate memory for the shadow and map it at the address in shadow memory that corresponds to the address returned by module_alloc(). __vmalloc_node_range() could be used for this purpose, except it puts a guard hole after the allocated area. A guard hole in shadow memory would be a problem, because at some future point we might need to have shadow memory at the address occupied by the guard hole, so we could fail to allocate shadow for module_alloc(). Now we have the VM_NO_GUARD flag disabling the guard page, so we need to pass it into __vmalloc_node_range(). Add a new parameter 'vm_flags' to the __vmalloc_node_range() function. Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Konstantin Serebryany <kcc@google.com> Cc: Dmitry Chernenkov <dmitryc@google.com> Signed-off-by: Andrey Konovalov <adech.fo@gmail.com> Cc: Yuri Gribov <tetra2005@gmail.com> Cc: Konstantin Khlebnikov <koct9i@gmail.com> Cc: Sasha Levin <sasha.levin@oracle.com> Cc: Christoph Lameter <cl@linux.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
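As a rough illustration, here is how a caller such as KASan's module-shadow setup could use the new parameter. This is a minimal sketch: only __vmalloc_node_range() and VM_NO_GUARD come from the commit; the wrapper name and range arithmetic are assumptions.

```c
#include <linux/vmalloc.h>

/* Allocate shadow memory for a module without the trailing guard hole,
 * so an adjacent shadow region can later be mapped directly behind it. */
static void *module_shadow_alloc(unsigned long shadow_start, unsigned long size)
{
	return __vmalloc_node_range(size, PAGE_SIZE,
				    shadow_start, shadow_start + size,
				    GFP_KERNEL, PAGE_KERNEL,
				    VM_NO_GUARD,	/* the new 'vm_flags' argument */
				    NUMA_NO_NODE,
				    __builtin_return_address(0));
}
```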
-
- 13 Feb 2015, 1 commit
-
-
Committed by Andy Lutomirski
If an attacker can cause a controlled kernel stack overflow, overwriting the restart block is a very juicy exploit target, because the restart_block is held in the same memory allocation as the kernel stack. Moving the restart block to struct task_struct prevents this exploit by making the restart_block harder to locate. Note that there are other fields in thread_info that are also easy targets, at least on some architectures. It's also a decent simplification, since the restart code is more or less identical on all architectures. [james.hogan@imgtec.com: metag: align thread_info::supervisor_stack] Signed-off-by: Andy Lutomirski <luto@amacapital.net> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Kees Cook <keescook@chromium.org> Cc: David Miller <davem@davemloft.net> Acked-by: Richard Weinberger <richard@nod.at> Cc: Richard Henderson <rth@twiddle.net> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Matt Turner <mattst88@gmail.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Haavard Skinnemoen <hskinnemoen@gmail.com> Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no> Cc: Steven Miao <realmz6@gmail.com> Cc: Mark Salter <msalter@redhat.com> Cc: Aurelien Jacquiot <a-jacquiot@ti.com> Cc: Mikael Starvik <starvik@axis.com> Cc: Jesper Nilsson <jesper.nilsson@axis.com> Cc: David Howells <dhowells@redhat.com> Cc: Richard Kuo <rkuo@codeaurora.org> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Michal Simek <monstr@monstr.eu> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Jonas Bonn <jonas@southpole.se> Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> Cc: Helge Deller <deller@gmx.de> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc) Tested-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc) Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Chen Liqin <liqin.linux@gmail.com> Cc: Lennox Wu <lennox.wu@gmail.com> Cc: Chris Metcalf <cmetcalf@ezchip.com> Cc: Guan Xuetao <gxt@mprc.pku.edu.cn> Cc: Chris Zankel <chris@zankel.net> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Guenter Roeck <linux@roeck-us.net> Signed-off-by: James Hogan <james.hogan@imgtec.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
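The mechanical effect on arch code is small; a minimal sketch of the access change, with field names taken from the commit:

```c
#include <linux/sched.h>

/* Restart handling after the move: the restart block is reached through
 * task_struct instead of the stack-adjacent thread_info. */
static long sketch_do_restart(void)
{
	struct restart_block *restart;

	/* before: restart = &current_thread_info()->restart_block; */
	restart = &current->restart_block;
	return restart->fn(restart);
}
```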
-
- 12 Feb 2015, 6 commits
-
-
Committed by Heiko Carstens
Just some minor coding style changes while I had to look at the code. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Heiko Carstens
When testing Sudeep Holla's cache info rework I didn't realize that the shared cpu masks are broken (all have the same cpu set). Let's fix this. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Heiko Carstens
Reduce the size of struct pcpu, since the pcpu_devices array consists of NR_CPUS elements of type struct pcpu. For most machines this is just a waste of memory, so let's try to make it a bit smaller. This saves 16k with performance_defconfig. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Heiko Carstens
Convert the per cpu topology cpu masks to a per cpu variable. At least for machines which have fewer possible cpus than NR_CPUS this can save a bit of memory (z/VM: max 64 vs 512 for performance_defconfig). This reduces the kernel image size by 100k. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Heiko Carstens
There is no reason to initialize the topology cpu masks already while setup_arch() is being called. It is sufficient to initialize the masks before the scheduler becomes SMP aware; therefore a pre-SMP initcall, aka early_initcall, is sufficient. This also allows converting the cpu_topology array into a per cpu variable with a later patch. Without this patch that wouldn't be possible, since the per cpu memory areas are not yet allocated while setup_arch is executed. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
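A sketch of what the move looks like; the function and helper names are illustrative, only the early_initcall mechanism is taken from the commit:

```c
#include <linux/init.h>

static void alloc_cpu_topology_masks(void)
{
	/* fill the per cpu topology masks; per cpu areas exist by now */
}

static int __init s390_topology_init_early(void)
{
	alloc_cpu_topology_masks();
	return 0;
}
early_initcall(s390_topology_init_early);	/* runs before the scheduler is SMP aware */
```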
-
Committed by Martin Schwidefsky
Git commit 8d8f2e18a6dbd3d09dd918788422e6ac8c878e96 "s390/vdso: ectg gettime support for CLOCK_THREAD_CPUTIME_ID" broke clock_gettime for CLOCK_THREAD_CPUTIME_ID. Git commit c742b31c "fast vdso implementation for CLOCK_THREAD_CPUTIME_ID" introduced the ECTG for clock id -2; correct would have been clock id -3. Fix the whole mess: CLOCK_THREAD_CPUTIME_ID is based on CPUCLOCK_SCHED and cannot be sped up by the vdso. A speedup is only available for clock id -3, which is CPUCLOCK_VIRT for the task currently running on the CPU. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 10 Feb 2015, 2 commits
-
-
Committed by Hendrik Brueckner
If a task uses vector registers, a save area is allocated to save/restore register states. Free the save area when releasing the task. Found the memory leak with kmemleak:

    unreferenced object 0x72885e00 (size 512):
      comm "vx-test", pid 26123, jiffies 4294945635 (age 256.810s)
      hex dump (first 32 bytes):
        00 00 00 00 00 00 00 00 00 00 00 01 db 71 06 41  .............q.A
        00 00 00 00 00 00 00 00 24 f7 a9 a7 51 94 79 bb  ........$...Q.y.
      backtrace:
        [<00000000002d1c8a>] kmem_cache_alloc_trace+0x272/0x3d0
        [<00000000001014ac>] alloc_vector_registers+0x54/0x138
        [<00000000001017c8>] data_exception+0x158/0x1b0
        [<00000000008b551e>] pgm_check_handler+0x13e/0x180
        [<00000000800008b6>] 0x800008b6

Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
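A minimal sketch of the corresponding fix, assuming the save area hangs off thread.vxrs and is freed from the task-release hook (both inferred from the commit text and backtrace):

```c
#include <linux/sched.h>
#include <linux/slab.h>

/* Release per-task arch state: free the vector register save area,
 * closing the leak reported above. kfree(NULL) is a no-op, so tasks
 * that never used vector registers are fine. */
void arch_release_task_struct(struct task_struct *tsk)
{
	kfree(tsk->thread.vxrs);
}
```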
-
Committed by Heiko Carstens
show_cacheinfo() needs to access the cacheinfo structure of any online cpu. This was done using smp_processor_id() as an index while in preemptible context, which means the cpu could go offline and the data be gone by the time it is accessed. Better use any online cpu instead and protect the data with get_online_cpus() and put_online_cpus(). Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
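The fixed pattern, sketched with an illustrative consumer function:

```c
#include <linux/cpu.h>
#include <linux/cpumask.h>

static void print_cacheinfo_for(int cpu)
{
	/* dump one cpu's cache levels (illustrative stub) */
}

static void show_cacheinfo_safely(void)
{
	get_online_cpus();			/* pin cpu hotplug */
	print_cacheinfo_for(cpumask_any(cpu_online_mask));
	put_online_cpus();
}
```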
-
- 09 Feb 2015, 1 commit
-
-
Committed by Ekaterina Tumanova
A new architecture extends STSI 3.2.2 with UUID and long names; KVM will provide the first implementation. This patch adds the additional data fields (Extended Name and UUID) from the 4KB block returned by the STSI 3.2.2 command and reflects this information in the /proc/sysinfo file accordingly. Signed-off-by: Ekaterina Tumanova <tumanova@linux.vnet.ibm.com> Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com> Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
- 29 Jan 2015, 4 commits
-
-
Committed by Heiko Carstens
Use a brcl 0,2 instruction for jump label nops at compile time, so we don't mix up the different nops during mcount/hotpatch call site detection. The initial jump label code instruction replacement will exchange these instructions with either a branch or a brcl 0,0 instruction. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
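A sketch of the resulting static-branch primitive; it follows the shape of the s390 jump label code of that era, but the exact __jump_table layout here is an assumption:

```c
#include <linux/jump_label.h>

static inline bool arch_static_branch_sketch(struct static_key *key)
{
	/* "brcl 0,2" is a six byte nop whose non-zero offset field
	 * distinguishes it from the "brcl 0,0" hotpatch/mcount nops. */
	asm_volatile_goto("0:	brcl 0,2\n"
			  ".pushsection __jump_table,\"aw\"\n"
			  ".balign 8\n"
			  ".quad 0b,%l[label],%0\n"
			  ".popsection\n"
			  : : "X" (key) : : label);
	return false;
label:
	return true;
}
```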
-
Committed by Heiko Carstens
Add sanity checks to verify that only expected code will be replaced. If the code patterns do not match, print the code patterns and panic, since something went terribly wrong. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Heiko Carstens
Make use of gcc's hotpatch support to generate better code for ftrace function tracing. The generated code now contains only a six byte nop in each function prologue, instead of a 24 byte code block which would be runtime patched to support function tracing. With the new code generation the runtime overhead for supporting function tracing is close to zero, while the original code showed a significant performance impact. Acked-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
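Roughly, gcc's hotpatch support can be requested per function. The attribute spelling below follows gcc's s390 documentation; whether the kernel uses the attribute or the equivalent -mhotpatch command line flag is an assumption here:

```c
/* hotpatch(0, 3): no halfwords before the function, three halfwords
 * (one six byte nop) at its start that ftrace can later overwrite
 * with a branch to the tracing code. */
void __attribute__((hotpatch(0, 3))) traced_function(void)
{
	/* the prologue begins with the patchable nop */
}
```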
-
Committed by Heiko Carstens
Christian Borntraeger reported that the now missing diag 44 calls (voluntary time slice end) cause a performance regression for stop_machine() calls if a machine has more virtual cpus than the host has physical cpus. This patch mainly reverts 57f2ffe1 ("s390: remove diag 44 calls from cpu_relax()"), with the exception that we still do not issue diag 44 calls if running with SMT enabled: due to group scheduling algorithms when running in LPAR this would lead to significant latencies. However, when running in LPAR we do not have more virtual than physical cpus. Reported-and-tested-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
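The restored behaviour amounts to the following sketch; the flag names match the s390 tree of that time but should be treated as assumptions:

```c
/* Voluntarily yield the virtual cpu while spinning, except when SMT is
 * enabled (smp_cpu_mtid > 0), to avoid LPAR group scheduling latencies. */
void cpu_relax(void)
{
	if (!smp_cpu_mtid && MACHINE_HAS_DIAG44)
		asm volatile("diag 0,0,0x44");
	barrier();
}
```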
-
- 23 Jan 2015, 1 commit
-
-
Committed by Martin Schwidefsky
Add the compare-and-delay instruction to the spin-lock and rw-lock retry loops. A CPU executing the compare-and-delay instruction stops until the lock value has changed. This is done to make the locking code for contended locks behave better with regard to the multi-threading facility: a thread of a core executing compare-and-delay allows the other threads of the core to get a larger share of the core's resources. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
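In the retry loop this looks roughly like the sketch below; the helper names mirror s390's locking code but are assumptions here:

```c
/* Contended spin-lock path: on failure, stop the hardware thread until
 * the lock word changes instead of re-polling it in a tight loop. */
static inline void arch_spin_lock_wait_sketch(arch_spinlock_t *lp, unsigned int cpu)
{
	while (1) {
		unsigned int old = ACCESS_ONCE(lp->lock);

		if (old == 0 && _raw_compare_and_swap(&lp->lock, 0, cpu))
			return;		/* lock acquired */
		if (MACHINE_HAS_CAD)
			_raw_compare_and_delay(&lp->lock, old);
	}
}
```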
-
- 22 Jan 2015, 4 commits
-
-
Committed by Paul Bolle
Commit 725908110a1f ("s390: add SMT support") added a check for CONFIG_ZFCPDUMP. But the Kconfig symbol ZFCPDUMP was removed in v3.16 through commit bf28a597 ("s390/dump: Remove CONFIG_ZFCPDUMP"), so this check will always evaluate to false. No one noticed, probably because the code also checks for CONFIG_CRASH_DUMP, which "also enables s390 zfcpdump". Dump the unneeded check. Signed-off-by: Paul Bolle <pebolle@tiscali.nl> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
The multi-threading facility is introduced with the z13 processor family. This patch adds code to detect the multi-threading facility. With the facility enabled, each core will surface multiple hardware threads to the system. Each hardware thread looks like a normal CPU to the operating system, with all its registers and properties. The SCLP interface reports the SMT topology indirectly via the maximum thread id: each CPU reported in the result of a read-scp-information is a core representing a number of hardware threads. To reflect the reduced CPU capacity when two hardware threads run on a single core, the MT utilization counter set is used to normalize the raw cputime obtained from the CPU timer deltas. This scaled cputime is reported via the taskstats interface. The normal /proc/stat numbers are based on the raw cputime and are not affected by the normalization. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
Avoid cache aliasing on z13 by aligning shared objects to multiples of 512K. The virtual addresses of a page from a shared file need to have identical bits in the range 2^12 to 2^18. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
Allow generating code that only runs on z13 machines. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 20 Jan 2015, 1 commit
-
-
Committed by Rusty Russell
Archs have been abusing module_free() to clean up their arch-specific allocations. Since module_free() is also (ab)used by BPF and trace code, let's keep it to simple allocations and provide a hook called before it. This means that avr32, ia64, parisc and s390 no longer need to implement their own module_free() at all; avr32 doesn't need module_finalize() either. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Cc: Chris Metcalf <cmetcalf@ezchip.com> Cc: Haavard Skinnemoen <hskinnemoen@gmail.com> Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no> Cc: Tony Luck <tony.luck@intel.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> Cc: Helge Deller <deller@gmx.de> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: linux-kernel@vger.kernel.org Cc: linux-ia64@vger.kernel.org Cc: linux-parisc@vger.kernel.org Cc: linux-s390@vger.kernel.org
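The hook introduced by this series is module_arch_freeing_init(); an arch that previously overrode module_free() now only frees its own data there. A sketch, using s390-style arch data as the assumed example:

```c
#include <linux/module.h>
#include <linux/vmalloc.h>

/* Called before the generic code frees the module memory itself;
 * only the arch-specific allocation is released here. */
void module_arch_freeing_init(struct module *mod)
{
	vfree(mod->arch.syminfo);
	mod->arch.syminfo = NULL;
}
```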
-
- 12 Jan 2015, 1 commit
-
-
Committed by Jan Willeke
If uprobes are single-stepped, for example with gdb, the behavior is now correct; before this patch, when gdb single-stepped a uprobe, the result was a SIGILL. When PER is active for any storage alteration and a uprobe is hit, a storage alteration event is indicated. These over-indications are filtered out by gdb if no change has happened within the observed area. Signed-off-by: Jan Willeke <willeke@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 08 Jan 2015, 5 commits
-
-
Committed by Sudeep Holla
This patch removes the redundant sysfs cacheinfo code by reusing the generic cacheinfo infrastructure introduced in commit 246246cb ("drivers: base: support cpu cache information interface to userspace via sysfs"). Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Heiko Carstens
_sclp_print_early() has a return value, but fails to sign-extend it if called from 64 bit code. This is not really a bug, since currently no caller cares what the return value is. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Heiko Carstens
Get rid of sparse warnings like this one:

    arch/s390/kernel/signal.c:244:1: warning: symbol 'sys_sigreturn' was not declared. Should it be static?

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
-
Committed by Heiko Carstens
Remove one of the two identical initializer entries. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
-
Committed by Heiko Carstens
Always verify that the code to be replaced matches what we expect to see. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 18 Dec 2014, 2 commits
-
-
Committed by Heiko Carstens
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Christian Borntraeger
In the cpu time accounting function vtime_account_irq_enter (vtime_account_system) we use a WARN_ON_ONCE(!irqs_disabled()). This is redundant, as the function virt_timer_forward is always called and has a BUG_ON(!irqs_disabled()). Removing the check saves several nanoseconds in my specific testcase (KVM entry/exit) and probably in all other callers, like (soft)irq entry/exit. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 08 Dec 2014, 6 commits
-
-
Committed by Martin Schwidefsky
To improve the output of the perf tool, hide most of the symbols in entry[64].S by using the '.L' prefix. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
On machines with support for vector registers the signal frame includes an area for the vector registers, and the ptrace regset interface allows reading and writing them. This is true even if the task never used any vector instruction. Only elf core dumps do not include the vector registers; to make things consistent, always include the vector register note in core dumps created on a machine with vector register support. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
The copy_thread function fails to reset the p->thread.vxrs pointer. This causes the child to use the same vector register save area as the parent, causing both data corruption and multiple frees of the memory for the save area after the tasks sharing it terminate. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Frederic Weisbecker
s390 uses an open coded seqcount to synchronize idle time accounting. Let's consolidate it with the standard API. Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Wu Fengguang <fengguang.wu@intel.com> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
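A sketch of what the consolidation means in practice, with illustrative field names:

```c
#include <linux/seqlock.h>

static seqcount_t idle_seq;		/* replaces the open coded counter */
static unsigned long long idle_time;

static void idle_time_add(unsigned long long delta)
{
	write_seqcount_begin(&idle_seq);
	idle_time += delta;
	write_seqcount_end(&idle_seq);
}

static unsigned long long idle_time_read(void)
{
	unsigned long long val;
	unsigned int seq;

	do {
		seq = read_seqcount_begin(&idle_seq);
		val = idle_time;
	} while (read_seqcount_retry(&idle_seq, seq));
	return val;
}
```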
-
Committed by Heiko Carstens
psw_idle() returns with interrupts disabled, so add the missing annotation. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Christian Borntraeger
debug_sprintf_event/exception are called even for debug events whose debug level disables them. All other functions already do this check in a wrapper function; let's do the same here. Because of the varargs the compiler refuses to inline this function, so let's wrap it via a macro. This patch saves around 80 ns on my z196 for a KVM round trip (we have two debug statements for entry and exit) when KVM is built as a module. The savings for built-in drivers are smaller, as we then avoid only the PLT overhead of a function call. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
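The wrapper amounts to something like this sketch (the real s390 macro may differ in details):

```c
/* Check the debug level in the caller so a disabled event costs only a
 * compare, not a varargs function call into the (renamed) implementation. */
#define debug_sprintf_event(id, level, fmt, ...)			\
({									\
	debug_entry_t *__ret = NULL;					\
									\
	if ((id) && (level) <= (id)->level)				\
		__ret = __debug_sprintf_event((id), (level),		\
					      (fmt), ##__VA_ARGS__);	\
	__ret;								\
})
```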
-
- 06 Dec 2014, 1 commit
-
-
Committed by Eric W. Biederman
Today there are three instances of setgroups, and due to an oversight their permission checking has diverged. Add a common function so that they may all share the same permission checking code. This corrects the oversight in the current permission checks and adds a helper to avoid it in the future. A user namespace security fix will update this new helper shortly. Cc: stable@vger.kernel.org Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
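The shared helper is small; its shape, with the name may_setgroups() and the CAP_SETGID check taken from the same patch series, is roughly:

```c
#include <linux/capability.h>
#include <linux/cred.h>

/* One permission check shared by all three setgroups paths. */
bool may_setgroups(void)
{
	struct user_namespace *user_ns = current_user_ns();

	return ns_capable(user_ns, CAP_SETGID);
}
```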
-
- 01 Dec 2014, 2 commits
-
-
Committed by Heiko Carstens
When we generate the instruction for out of line execution, the length of the instruction to be copied was evaluated from an uninitialized memory location. Therefore we ended up with a random number of bytes (2, 4 or 6) being copied, instead of taking the real instruction length into account. This works surprisingly well most of the time, but still not always. Reported-by: Ursula Braun <ursula.braun@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Sebastian Ott
Commit eb7e7d76 ("s390: Replace __get_cpu_var uses") broke machine check handling. We copy machine check information from per-cpu storage to a stack variable for local processing; afterwards we should zap the per-cpu variable, not the stack variable. Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Acked-by: Christoph Lameter <cl@linux.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
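The bug class is easy to show in a sketch (type and variable names illustrative):

```c
#include <linux/percpu.h>
#include <linux/string.h>

struct mcck_struct { unsigned long mcck_code; };	/* illustrative */
static DEFINE_PER_CPU(struct mcck_struct, cpu_mcck);

static void machine_check_sketch(void)
{
	struct mcck_struct mcck;

	mcck = *this_cpu_ptr(&cpu_mcck);		/* snapshot to the stack */
	/* zap the per-cpu copy, not the stack copy (the broken
	 * conversion cleared &mcck instead): */
	memset(this_cpu_ptr(&cpu_mcck), 0, sizeof(mcck));
}
```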
-
- 21 Nov 2014, 1 commit
-
-
Committed by Heiko Carstens
Translation exceptions should never happen, since they imply that either we screwed up the page tables or failed to properly flush the TLB. In both cases we should not just kill user space or walk the kernel exception tables; an oops or a panic (panic_on_oops) is the better answer. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-