- 31 July 2016, 4 commits
-
By Heiko Carstens
If the kernel needs more facilities to run than the machine it is running on provides, print the facility bit numbers that are missing. This makes it easy to tell what went wrong: whether the machine simply does not provide a required facility, or whether the kernel or the hypervisor may have a bug.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
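A minimal user-space sketch of the reporting idea: compare a required-facility mask against the provided one and print every missing bit number. The mask layout and the LSB-first bit numbering here are illustrative assumptions, not the s390 facility (STFLE) encoding or the kernel's actual code.

#include <stdio.h>

/* Print every facility bit that is required but not provided. */
static void report_missing_facilities(unsigned long required, unsigned long provided)
{
        unsigned long missing = required & ~provided;
        int bit;

        for (bit = 0; bit < 64; bit++)
                if (missing & (1UL << bit))
                        printf("missing required facility bit: %d\n", bit);
}

int main(void)
{
        /* Example: bits 0, 7 and 17 required, but bit 7 is not provided. */
        report_missing_facilities((1UL << 0) | (1UL << 7) | (1UL << 17),
                                  (1UL << 0) | (1UL << 17));
        return 0;
}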
-
By Heiko Carstens
If there is a facility mismatch, the kernel only emits a warning that the processor is not recent enough and then stops operating. That gives very little insight into what actually went wrong. As a first step, also print the machine type.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens
There is no reason to keep this code in assembly language, so convert it to C. Note that this code needs special treatment: it is called very early, and one of the side effects is that e.g. the bss section is not yet cleared. Therefore the preferred place for what would otherwise be static variables is the stack, which has a size of 16KB. There is no functional change with this patch.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
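A schematic sketch of the pattern described above; the function and structure names are invented for the illustration and are not the actual early-boot code.

/* Early boot: .bss is not cleared yet, so nothing may rely on
 * zero-initialized static storage.  Work buffers therefore live on
 * the stack (16KB are available at this point) and are initialized
 * explicitly. */
struct detect_buf {                     /* hypothetical buffer type */
        unsigned char data[256];
};

static void early_detect_sketch(void)  /* hypothetical function name */
{
        struct detect_buf buf = { };    /* on the stack, not static */

        /* ... fill and evaluate buf here ... */
}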
-
By Heiko Carstens
The early sclp code may be called before the bss section is cleared. Therefore move all variables to the data section.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 18 July 2016, 1 commit
-
By Dan Carpenter
I can never remember precedence rules. Let's add some parentheses so this code is clearer.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
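The log does not show the expression in question; the following is only a generic illustration of why explicit parentheses help with C operator precedence (the variable and values are hypothetical).

#include <stdio.h>

int main(void)
{
        unsigned int flags = 0x2;

        /* '==' binds tighter than '&', so this parses as flags & (0x1 == 0),
         * which is always 0 -- almost certainly not what was meant. */
        if (flags & 0x1 == 0)
                printf("without parentheses: bit 0 looks clear\n");

        /* Explicit parentheses state the intent and remove any doubt. */
        if ((flags & 0x1) == 0)
                printf("with parentheses: bit 0 is clear\n");

        return 0;
}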
-
- 16 July 2016, 1 commit
-
By Daniel Borkmann
This patch adds support for non-linear data on raw records. It extends raw records to have one or multiple fragments that will be written linearly into the ring slot, where each fragment can optionally have a custom callback handler to walk and extract complex, possibly non-linear data. If a callback handler is provided for a fragment, then the new __output_custom() will be used instead of __output_copy() for the perf_output_sample() part. perf_prepare_sample() does all the size calculation only once, so perf_output_sample() doesn't need to redo the same work anymore, meaning real_size and padding will be cached in the raw record. The raw record becomes 32 bytes in size without holes; to not increase it further and to avoid doing unnecessary recalculations in the fast path, we can reuse the next pointer of the last fragment; the idea here is borrowed from ZERO_OR_NULL_PTR(), which should keep the perf_output_sample() path for PERF_SAMPLE_RAW minimal. This facility is needed for BPF's event output helper as a first user that will, in a follow-up, add an additional perf_raw_frag to its perf_raw_record in order to be able to more efficiently dump skb context after a linear head meta data related to it. skbs can be non-linear and thus need a custom output function to dump buffers. Currently, the skb data needs to be copied twice; with the help of __output_custom() this work only needs to be done once. Future users could be things like XDP/BPF programs that work on a different context though and would thus also have a different callback function. The few users of raw records are adapted to initialize their frag data from the raw record itself; no change in behavior for them. The code is based upon a PoC diff provided by Peter Zijlstra [1].
[1] http://thread.gmane.org/gmane.linux.network/421294
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
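A hedged sketch of the fragment layout the description implies; the field names below are paraphrased from the text above and from the upstream perf headers, so treat them as an approximation rather than the authoritative definition.

typedef unsigned long (*perf_copy_f)(void *dst, const void *src,
                                     unsigned long off, unsigned long len);

struct perf_raw_frag {
        union {
                struct perf_raw_frag    *next;  /* further fragments, if any */
                unsigned long           pad;    /* reused on the last fragment */
        };
        perf_copy_f                     copy;   /* optional custom copy callback */
        void                            *data;  /* fragment payload */
        unsigned int                    size;   /* fragment length */
};

struct perf_raw_record {
        struct perf_raw_frag            frag;   /* first (inline) fragment */
        unsigned int                    size;   /* total size incl. padding, cached */
};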
-
- 14 July 2016, 2 commits
-
By Sebastian Andrzej Siewior
Install the callbacks via the state machine and let the core invoke the callbacks on the already online CPUs.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-s390@vger.kernel.org
Cc: rt@linutronix.de
Link: http://lkml.kernel.org/r/20160713153334.518084858@linutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
By Thomas Gleixner
Install the callbacks via the state machine and let the core invoke the callbacks on the already online CPUs.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de>
Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Cc: linux-s390@vger.kernel.org
Cc: rt@linutronix.de
Link: http://lkml.kernel.org/r/20160713153334.436370635@linutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 13 July 2016, 1 commit
-
By Peter Oberparleiter
Use the same code structure when determining preferred consoles for Linux running as a KVM guest as for Linux running in LPAR or as a z/VM guest:
- Extend the console_mode variable to cover vt220 and hvc consoles
- Determine sensible console defaults in conmode_default()
- Remove KVM-special handling in set_preferred_console()
Ensure that the sclp line mode console is also registered when the vt220 console was selected, so as not to change existing behavior that someone might be relying on. As an externally visible change, KVM guest users can now select the 3270 or 3215 console devices using the conmode= kernel parameter, provided that support for the corresponding driver was compiled into the kernel.
Signed-off-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Jing Liu <liujbjl@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 04 July 2016, 2 commits
-
By Heiko Carstens
After linking there are several symbols for the same address that the __switch_to symbol points to. E.g.:
000000000089b9c0 T __kprobes_text_start
000000000089b9c0 T __lock_text_end
000000000089b9c0 T __lock_text_start
000000000089b9c0 T __sched_text_end
000000000089b9c0 T __switch_to
When disassembling with "objdump -d" this results in a missing __switch_to function; it would be named __kprobes_text_start instead. To unconfuse objdump, add a nop in front of the kprobes text section. That way __switch_to appears again. Obviously this solution is sort of a hack, since whether it works also depends on link order. However, it is the best I can come up with for now.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens
Expose the maximum thread id in /proc/cpuinfo. With the new line the output looks like this:
vendor_id       : IBM/S390
bogomips per cpu: 20325.00
max thread id   : 1
With this new interface it is always possible to tell the correct number of cpu threads potentially being used by the kernel.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
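A hedged sketch of how such a line is typically emitted from a show_cpuinfo() style seq_file handler; the field name matches the output above, but the surrounding function and the smp_cpu_mtid symbol are assumptions about the s390 code rather than a quote of it.

#include <linux/seq_file.h>

/* Illustrative only: print the highest usable thread id per core. */
static void show_max_thread_id(struct seq_file *m)
{
        extern unsigned int smp_cpu_mtid;       /* assumed s390 symbol */

        seq_printf(m, "max thread id   : %u\n", smp_cpu_mtid);
}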
-
- 28 June 2016, 8 commits
-
By Heiko Carstens
Avoid using the address of a process' thread_info structure as the kernel stack address. This will break as soon as the thread_info structure is removed from the stack, and in addition the change makes the code a bit more understandable.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens
Remove a leftover from the code that transferred a couple of TIF bits from the previous task to the next task.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens
Now that hopefully all inline assemblies have been converted to single basic blocks we can enable kcov on s390. Note that this patch does not exclude as many files from instrumentation on s390 as the x86 variant does. Right now I don't see a reason to do that; however, additional files or directories can be excluded at any time. The runtime overhead seems to be quite high.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens
Use only simple inline assemblies which consist of a single basic block if the register asm construct is being used. Otherwise gcc would generate broken code if the compiler option -fsanitize-coverage=trace-pc were used.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens
Use only simple inline assemblies which consist of a single basic block if the register asm construct is being used. Otherwise gcc would generate broken code if the compiler option -fsanitize-coverage=trace-pc were used.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens
The "cpu" field name within /proc/cpuinfo conflicts with the powerpc and sparc output, where it contains the cpu model name. So rename the field to "cpu number", which shouldn't generate any conflicts.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens
Now that the oprofile sampling code is gone there is only one user of the sampling facility left. Therefore the reserve and release functions can be removed.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Michael Holzheu
This reverts commit 852ffd0f. There are use cases where an intermediate boot kernel (1) uses kexec to boot the final production kernel (2). For this scenario we should provide the original boot information to the production kernel (2). Therefore clearing the boot information during kexec() should not be done.
Cc: stable@vger.kernel.org # v3.17+
Reported-by: Steffen Maier <maier@linux.vnet.ibm.com>
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 23 June 2016, 1 commit
-
By Paul Moore
When executing s390 code on s390x the syscall arguments are not properly masked, leading to some malformed audit records.
Signed-off-by: Paul Moore <paul@paul-moore.com>
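A schematic sketch of the kind of masking involved, not the actual patch: when a 31-bit compat task traps into the 64-bit kernel, the upper halves of the argument registers may contain stale bits and need to be cleared before the values are reported.

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: mask a 64-bit register value down to the 32 bits
 * that are architecturally meaningful for a compat (31-bit) caller. */
static uint64_t compat_arg(uint64_t reg)
{
        return reg & 0xffffffffUL;
}

int main(void)
{
        uint64_t reg = 0xdeadbeef00000100ULL;   /* stale upper half + real arg */

        printf("raw    : %#llx\n", (unsigned long long)reg);
        printf("masked : %#llx\n", (unsigned long long)compat_arg(reg));
        return 0;
}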
-
- 16 June 2016, 1 commit
-
By Hendrik Brueckner
On s390, there are two different hardware PMUs for counting and sampling. Previously, both PMUs shared the perf_hw_context, which is not correct and, recently, results in this warning:
------------[ cut here ]------------
WARNING: CPU: 5 PID: 1 at kernel/events/core.c:8485 perf_pmu_register+0x420/0x428
Modules linked in:
CPU: 5 PID: 1 Comm: swapper/0 Not tainted 4.7.0-rc1+ #2
task: 00000009c5240000 ti: 00000009c5234000 task.ti: 00000009c5234000
Krnl PSW : 0704c00180000000 0000000000220c50 (perf_pmu_register+0x420/0x428)
           R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:3 CC:0 PM:0 RI:0 EA:3
Krnl GPRS: ffffffffffffffff 0000000000b15ac6 0000000000000000 00000009cb440000
           000000000022087a 0000000000000000 0000000000b78fa0 0000000000000000
           0000000000a9aa90 0000000000000084 0000000000000005 000000000088a97a
           0000000000000004 0000000000749dd0 000000000022087a 00000009c5237cc0
Krnl Code: 0000000000220c44: a7f4ff54        brc     15,220aec
           0000000000220c48: 92011000        mvi     0(%r1),1
          #0000000000220c4c: a7f40001        brc     15,220c4e
          >0000000000220c50: a7f4ff12        brc     15,220a74
           0000000000220c54: 0707            bcr     0,%r7
           0000000000220c56: 0707            bcr     0,%r7
           0000000000220c58: ebdff0800024    stmg    %r13,%r15,128(%r15)
           0000000000220c5e: a7f13fe0        tmll    %r15,16352
Call Trace:
([<000000000022087a>] perf_pmu_register+0x4a/0x428)
([<0000000000b2c25c>] init_cpum_sampling_pmu+0x14c/0x1f8)
([<0000000000100248>] do_one_initcall+0x48/0x140)
([<0000000000b25d26>] kernel_init_freeable+0x1e6/0x2a0)
([<000000000072bda4>] kernel_init+0x24/0x138)
([<000000000073495e>] kernel_thread_starter+0x6/0xc)
([<0000000000734958>] kernel_thread_starter+0x0/0xc)
Last Breaking-Event-Address:
 [<0000000000220c4c>] perf_pmu_register+0x41c/0x428
---[ end trace 0c6ef9f5b771ad97 ]---
Using the perf_sw_context is an option because the cpum_cf PMU does not use interrupts. To make this clearer, initialize the capabilities in the PMU structure.
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
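A hedged sketch of what distinguishing the two contexts can look like when registering a PMU; the field values are illustrative and the structure name is made up, so this is not the exact s390 code.

#include <linux/perf_event.h>

/* Illustrative PMU registration fields: a counting PMU that raises no
 * interrupts can live in the software context and should advertise that
 * via its capabilities. */
static struct pmu cpumf_counter_pmu_sketch = {
        .task_ctx_nr    = perf_sw_context,              /* instead of perf_hw_context */
        .capabilities   = PERF_PMU_CAP_NO_INTERRUPT,    /* no sampling interrupts */
        /* .event_init, .add, .del, .start, .stop, .read, ... as usual */
};

Registration would then go through perf_pmu_register(&cpumf_counter_pmu_sketch, "cpum_cf_sketch", -1), where the name and type id here are placeholders.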
-
- 15 June 2016, 4 commits
-
By Heiko Carstens
The last in-kernel user is gone, so we can finally remove this code.
Reviewed-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens
Implement the calculation of loops_per_jiffies with fp instructions, which are available on all 64-bit machines. To save and restore the floating point register context use the new vx support functions.
Reviewed-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Kees Cook
Close the hole where ptrace can change a syscall out from under seccomp.
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: linux-s390@vger.kernel.org
-
By Andy Lutomirski
Currently, if arch code wants to supply seccomp_data directly to seccomp (which is generally much faster than having seccomp do it using the syscall_get_xyz() API), it has to use the two-phase seccomp hooks. Add it to the easy hooks, too.
Cc: linux-arch@vger.kernel.org
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
-
- 14 June 2016, 1 commit
-
By Hendrik Brueckner
Introduce the kernel_fpu_begin() and kernel_fpu_end() functions to enclose any in-kernel use of FPU instructions and registers. In enclosed sections, you can perform floating-point or vector (SIMD) computations. The functions take care of saving and restoring FPU register contents and controls. For usage details, see the guidelines in arch/s390/include/asm/fpu/api.h
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
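A schematic usage sketch of the begin/end bracketing described above. The exact argument list and the flag name KERNEL_VXR used here are assumptions; the authoritative interface and usage guidelines live in arch/s390/include/asm/fpu/api.h.

#include <asm/fpu/api.h>        /* kernel_fpu_begin()/kernel_fpu_end() */

/* Illustrative only: bracket an in-kernel vector computation so the
 * user-space FPU/VX register contents are saved and restored. */
static void vector_work_sketch(void)
{
        struct kernel_fpu state;                /* save area (assumed type) */

        kernel_fpu_begin(&state, KERNEL_VXR);   /* save context, enable FPU/VX use */

        /* ... floating-point or vector (SIMD) computation goes here ... */

        kernel_fpu_end(&state, KERNEL_VXR);     /* restore the saved context */
}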
-
- 13 June 2016, 14 commits
-
By Heiko Carstens
I don't have a z10 to test this anymore, so I have no idea if the code works at all or even crashes. I can try to emulate, but it is just guesswork. Nor do we know if the z10 special handling is performance-wise still better than the generic handling. There have been a lot of changes to the scheduler. Therefore let's play it safe and remove the special handling.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens
The z13 machine added a fourth level to the cpu topology information. The new top level is called drawer. A drawer contains two books, which used to be the top level. Adding this additional scheduling domain showed performance improvements of up to 8% for some workloads, while no workloads seem to be impacted in a negative way.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens
Rename DIAG308_IPL and DIAG308_DUMP to DIAG308_LOAD_CLEAR and DIAG308_LOAD_NORMAL_DUMP to better reflect the associated IPL functions.
Suggested-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Suggested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens
Avoid clearing memory for CCW-type re-ipl within a logical partition. This can save a significant amount of time if a logical partition contains a lot of memory. On the other hand we still clear memory when running within a second-level hypervisor, since the hypervisor can then simply free all memory that was used for the guest.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens
We have some inline assemblies where the extable entry points to a label at the end of an inline assembly which is not followed by an instruction. On the other hand we also have inline assemblies where the extable entry points to the first instruction of an inline assembly. If a first-type inline asm (extable points to an empty label at the end) is directly followed by a second-type inline asm (extable points to the first instruction), then we would have two different extable entries that point to the same instruction but have different target addresses. This can lead to quite random behaviour, depending on sorting order. I verified that we currently do not have such collisions within the kernel. However, to avoid such subtle bugs, add a couple of nop instructions to those inline assemblies which contain an extable that points to an empty label.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
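A hedged illustration of the shape being described; the instruction sequence and operands are invented for the example, EX_TABLE() is used as provided by the s390 headers, and -EFAULT needs <linux/errno.h>, so this is not a copy of an affected kernel site.

/* Illustration: "0:" is the potentially faulting instruction, "1:" is the
 * extable fixup target.  Without the nop, "1:" would be an empty label at
 * the very end of the asm and could share its address with the start of a
 * directly following inline assembly; the nop avoids that collision. */
static inline int try_load_sketch(unsigned long *ptr, unsigned long *val)
{
        int rc = -EFAULT;

        asm volatile(
                "0:     lg      %1,%2\n"        /* may fault */
                "       lhi     %0,0\n"         /* success: rc = 0 */
                "1:     nop\n"                  /* fixup target, now a real instruction */
                EX_TABLE(0b, 1b)
                : "+d" (rc), "=d" (*val)
                : "Q" (*ptr));
        return rc;
}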
-
By Sebastian Ott
Use dynamically allocated irq descriptors on s390 which allows us to get rid of the s390 specific config option PCI_NR_MSI and exploit more MSI interrupts. Also the size of the kernel image is reduced by 131K (using performance_defconfig).
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens
Small cleanup patch to use the shorter __section macro everywhere.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens
On s390 __ro_after_init is currently mapped to __read_mostly, which means that data marked as __ro_after_init will not be protected. The reason is that the common code __ro_after_init implementation is x86-centric: the ro_after_init data section was added to rodata, since x86 enables write protection of kernel text and rodata very late. On s390 we have write protection for these sections enabled with the initial page tables, so adding the ro_after_init data section to rodata does not work on s390. In order to make __ro_after_init work properly on s390, move the ro_after_init data right behind rodata. Unlike the rodata section it will be marked read-only later, after all init calls have happened. This s390-specific implementation adds new __start_ro_after_init and __end_ro_after_init labels; everything in between will be marked read-only after the init calls have happened. In addition to the __ro_after_init data, also move the exception table there, since from a practical point of view it fits the __ro_after_init requirements.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Martin Schwidefsky
ptep_flush_lazy and pmdp_flush_lazy use mm->context.attach_count to decide between a lazy TLB flush and an immediate TLB flush. The field contains two 16-bit counters: the number of CPUs that have the mm attached and can create TLB entries for it, and the number of CPUs in the middle of a page table update. The __tlb_flush_asce, ptep_flush_direct and pmdp_flush_direct functions use the attach counter and a mask check with mm_cpumask(mm) to decide between a local flush on the current CPU and a global flush. For all these functions the decision between lazy vs immediate and local vs global TLB flush can be based on CPU masks. There are two masks: mm->context.cpu_attach_mask with the CPUs that are actively using the mm, and mm_cpumask(mm) with the CPUs that have used the mm since the last full flush. The decision between lazy vs immediate flush is based on mm->context.cpu_attach_mask; to decide between local vs global flush, mm_cpumask(mm) is used. With this patch all checks use the CPU masks, and the old counter mm->context.attach_count with its two 16-bit values is turned into a single counter mm->context.flush_count that keeps track of the number of CPUs with incomplete page table updates. The sole user of this counter is finish_arch_post_lock_switch(), which waits for the end of all page table updates.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Martin Schwidefsky
The External-Time-Reference (ETR) clock synchronization interface has been superseded by Server-Time-Protocol (STP). Remove the outdated ETR interface.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Martin Schwidefsky
The PTFF instruction can be used to retrieve information about UTC, including the current number of leap seconds. Use this value to convert the coordinated server time value of the TOD clock to a proper UTC timestamp to initialize the system time. Without this correction the system time will be off by the number of leap seconds until it has been corrected via NTP.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Martin Schwidefsky
It is possible to specify a user offset for the TOD clock, e.g. +2 hours. The TOD clock will carry this offset even if the clock is synchronized with STP. This makes the time stamps acquired with get_sync_clock() useless, as another LPAR might use a different TOD offset. Use the PTFF instruction to get the TOD epoch difference and subtract it from the TOD clock value to get a physical timestamp. As the epoch difference contains the sync check delta as well, the LPAR offset value to the physical clock needs to be refreshed after each clock synchronization.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Martin Schwidefsky
The sync clock operation of the channel subsystem call for STP delivers the TOD clock difference as a result. Use this TOD clock difference instead of the difference between the TOD timestamps before and after the sync clock operation.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens
Reducing the size of the memory reserved for the crash kernel will result in an immediate crash on s390. The reason is that we do not create struct pages for memory that is reserved. If that memory is freed, any access to the struct pages which correspond to this memory will result in invalid memory accesses and a kernel panic. Fix this by properly creating struct pages when the system gets initialized. Also change the code to make use of set_memory_ro() and set_memory_rw() so page tables will be split if required.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-