- 23 Feb 2017, 1 commit

By Heiko Carstens:
This is just a preparation patch in order to keep the "restore address space after syscall" patch small. Rename CIF_ASCE to CIF_ASCE_PRIMARY to be unique and specific when introducing a second CIF_ASCE_SECONDARY CIF flag.

Suggested-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
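For context, a minimal sketch of what the resulting flag namespace might look like; the bit values are illustrative, not taken from the source tree:

    /* CPU information flags (bit numbers illustrative) */
    #define CIF_ASCE_PRIMARY    0   /* was CIF_ASCE: reload the primary ASCE */
    #define CIF_ASCE_SECONDARY  1   /* introduced by the follow-up patch */

    #define _CIF_ASCE_PRIMARY   (1 << CIF_ASCE_PRIMARY)
    #define _CIF_ASCE_SECONDARY (1 << CIF_ASCE_SECONDARY)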
-
- 20 Feb 2017, 1 commit

By Martin Schwidefsky:
Fix PER tracing of system calls, which was broken by git commit 34525e1f ("s390: store breaking event address only for program checks").

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 08 Feb 2017, 1 commit

By Martin Schwidefsky:
Bit 0x100 of a page table, segment table, or region table entry can be used to disallow code execution for the virtual addresses associated with the entry. There is one tricky bit: the system call to return from a signal is part of the signal frame written to the user stack. With a non-executable stack this would stop working. To avoid breaking things, the protection fault handler checks the opcode that caused the fault for 0x0a77 (sys_sigreturn) and 0x0aad (sys_rt_sigreturn) and injects a system call. This is preferable to the alternative solution with a stub function in the vdso because it also works for vdso=off and statically linked binaries.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
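A hedged C sketch of the described check; the helper name and the int_code handling are assumptions, only the opcodes come from the message (0x0a is the svc opcode, 0x77 == 119 == __NR_sigreturn, 0xad == 173 == __NR_rt_sigreturn):

    /* hypothetical helper called from the protection fault handler */
    static int maybe_forward_sigreturn(struct pt_regs *regs)
    {
        u16 opcode;

        if (get_user(opcode, (u16 __user *)regs->psw.addr))
            return 0;
        if (opcode != 0x0a77 && opcode != 0x0aad)
            return 0;                 /* a real protection fault */
        regs->int_code = opcode;      /* pretend the svc executed ... */
        return 1;                     /* ... and take the system call path */
    }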
-
- 31 Jan 2017, 1 commit

By Martin Schwidefsky:
The Principles of Operation specifies that the breaking event address is stored to address 0x110 in the prefix page only for program checks. The last branch in user space is lost as soon as a branch in kernel space is executed after e.g. an svc. This makes it impossible to accurately maintain the breaking event address for a user space process. Simplify the code: just copy the current breaking event address from 0x110 to the task structure for program checks from user space.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
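In C terms the simplified logic amounts to roughly the following; the real copy happens in the program check path in entry.S, and the helper name is hypothetical:

    static void store_last_break(struct pt_regs *regs)
    {
        if (user_mode(regs))
            current->thread.last_break =
                S390_lowcore.breaking_event_addr;
    }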
-
- 12 Dec 2016, 1 commit

By Heiko Carstens:
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 07 Dec 2016, 1 commit

By Martin Schwidefsky:
For system damage machine checks or machine checks due to invalid PSW fields the system will be stopped. In order to get an oops message out before killing the system, the machine check handler branches to .Lmcck_panic, switches to the panic stack and then does the usual machine check handling. The switch to the panic stack is incomplete: the stack pointer in %r15 is replaced, but the pt_regs pointer in %r11 is not. The result is a program check which will kill the system in a slightly different way.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 25 Nov 2016, 1 commit

By Martin Schwidefsky:
The LAST_BREAK macro in entry.S uses a different instruction sequence for CONFIG_MARCH_Z900 builds. The branch target offset to skip the store of the last breaking event address needs to take the different length of the code block into account.

Fixes: f8fc82b4 ("s390: move sys_call_table and last_break from thread_info to thread_struct")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 23 Nov 2016, 1 commit

By Heiko Carstens:
We have the s390 specific THREAD_ORDER define and the THREAD_SIZE_ORDER define which is also used in common code. Both have exactly the same semantics. Therefore get rid of THREAD_ORDER and always use THREAD_SIZE_ORDER instead.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
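After the cleanup only the common macro remains, e.g. (a sketch, with the order value the 64-bit builds used at the time):

    #define THREAD_SIZE_ORDER 2
    #define THREAD_SIZE       (PAGE_SIZE << THREAD_SIZE_ORDER)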
-
- 15 Nov 2016, 1 commit

By Martin Schwidefsky:
Move the last two architecture specific fields from the thread_info structure to the thread_struct. All that is left in thread_info is the flags field.

Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 11 Nov 2016, 2 commits

By Heiko Carstens:
This is the s390 variant of commit 15f4eae7 ("x86: Move thread_info into task_struct").

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Martin Schwidefsky:
Convert s390 to use a field in the struct lowcore for the CPU preemption count. It is a bit cheaper to access a lowcore field compared to a thread_info variable and it removes the dependency on a task related structure.

bloat-o-meter on the vmlinux image for the default configuration (CONFIG_PREEMPT_NONE=y) reports a small reduction in text size:

add/remove: 0/0 grow/shrink: 18/578 up/down: 228/-5448 (-5220)

A larger improvement is achieved with the default configuration but with CONFIG_PREEMPT=y and CONFIG_DEBUG_PREEMPT=n:

add/remove: 2/6 grow/shrink: 59/4477 up/down: 1618/-228762 (-227144)

Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
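The cheap accessor then reduces to a single lowcore load; a minimal sketch along the lines of arch/s390/include/asm/preempt.h:

    static inline int preempt_count(void)
    {
        return READ_ONCE(S390_lowcore.preempt_count);
    }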
-
- 08 Aug 2016, 1 commit

By Al Viro:
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 04 Jul 2016, 1 commit

By Heiko Carstens:
After linking there are several symbols for the same address that the __switch_to symbol points to. E.g.:

000000000089b9c0 T __kprobes_text_start
000000000089b9c0 T __lock_text_end
000000000089b9c0 T __lock_text_start
000000000089b9c0 T __sched_text_end
000000000089b9c0 T __switch_to

When disassembling with "objdump -d" this results in a missing __switch_to function; it would be named __kprobes_text_start instead. To unconfuse objdump add a nop in front of the kprobes text section. That way __switch_to appears again. Obviously this solution is sort of a hack, since it also depends on link order whether this works or not. However it is the best I can come up with for now.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 28 Jun 2016, 1 commit

By Heiko Carstens:
Remove a leftover from the code that transferred a couple of TIF bits from the previous task to the next task.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 10 Mar 2016, 1 commit

By Martin Schwidefsky:
There is a tricky interaction between the machine check handler and the critical sections of the load_fpu_regs and save_fpu_regs functions. If the machine check interrupts one of the two functions, the critical section cleanup will complete the function before the machine check handler s390_do_machine_check is called. The trouble is that the machine check handler needs to validate the floating point registers *before* and not *after* the completion of load_fpu_regs/save_fpu_regs. The simplest solution is to rewind the PSW to the start of load_fpu_regs/save_fpu_regs and retry the function after the return from the machine check handler.

Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: <stable@vger.kernel.org> # 4.3+
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 02 Mar 2016, 1 commit

By Christian Borntraeger:
Commit e22cf8ca ("s390/cpumf: rework program parameter setting to detect guest samples") requires guest changes to get a proper guest/host distinction. We can do better: we can use the primary ASN value, which is set on all Linux variants, and compare it with the host pp value. We now have the following cases:

1. Guest using PP
   host sample:  gpp == 0, asn == hpp --> host
   guest sample: gpp != 0 --> guest
2. Guest not using PP
   host sample:  gpp == 0, asn == hpp --> host
   guest sample: gpp == 0, asn != hpp --> guest

As soon as the host no longer sets CR4, we must back out this heuristic - let's add a comment in switch_to.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
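The decision table condenses to a small predicate; a hedged sketch with illustrative names (gpp: guest program parameter, asn: primary ASN from the sample, hpp: host program parameter):

    static bool sample_is_guest(u64 gpp, u16 asn, u16 hpp)
    {
        if (gpp != 0)
            return true;        /* case 1: the guest sets the PP */
        return asn != hpp;      /* case 2: host only if the ASN matches */
    }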
-
- 27 Nov 2015, 1 commit

By Martin Schwidefsky:
It does not make sense to try to relinquish the time slice with diag 0x9c to a CPU that is in a state in which it cannot be scheduled. The scenario where this can happen is a CPU waiting in udelay/mdelay while holding a spin-lock. Add a CIF bit to tag a CPU in enabled wait and use it to detect that the yield of a CPU will not be successful, and skip the diagnose call.

Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
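A hedged sketch of the resulting yield logic; the diag 0x9c invocation is simplified and surrounding checks (e.g. whether diag 9c is available at all) are omitted:

    static inline void smp_yield_cpu(int cpu)
    {
        /* the target sits in enabled wait and cannot be scheduled,
         * so handing it our time slice would be wasted effort */
        if (test_cpu_flag_of(CIF_ENABLED_WAIT, cpu))
            return;
        asm volatile("diag %0,0,0x9c"
                     : : "d" (pcpu_devices[cpu].address));
    }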
-
- 14 Oct 2015, 5 commits

By Heiko Carstens:
When using systemtap it was observed that our udelay implementation is rather suboptimal if being called from a kprobe handler installed by systemtap. The problem was observed when a kprobe was installed on lock_acquired(). When the probe was hit, the kprobe handler called udelay, which set up an (internal) timer, re-enabled interrupts (only the clock comparator interrupt) and waited for the interrupt. This is an optimization to avoid busy looping by the CPU while waiting for enough time to pass. The problem is that the interrupt handler still calls irq_enter()/irq_exit(), which again can lead to a deadlock, since some accounting functions may take locks as well. If one of these locks is the same lock that caused lock_acquired() to be called, we have a nice deadlock.

This patch reworks the udelay code for the interrupts-disabled case to immediately leave the low level interrupt handler when the clock comparator interrupt happens. That way no C code is called and the deadlock cannot happen anymore.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Christian Borntraeger:
The program parameter can be used to mark hardware samples with some token. Previously, it was used to mark guest samples only. Improve the program parameter doubleword by combining two parts: the leftmost LPP part and the rightmost PID part. Set the PID part for processes by using the task PID. To distinguish host and guest samples for the kernel (where the PID part is zero), the guest must always set the program parameter to a non-zero value. Use the leftmost bit in the LPP part of the program parameter to be able to detect guest kernel samples.

[brueckner@linux.vnet.ibm.com]: Split __LC_CURRENT and introduced __LC_LPP. Corrected __LC_CURRENT users, adjusted assembler parts, and updated the commit message accordingly.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
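A sketch of the resulting doubleword layout; the union is illustrative, only the LPP/PID split and the guest-kernel bit come from the message:

    union program_parameter {
        u64 val;            /* what is loaded via __LC_LPP */
        struct {
            u32 lpp;        /* leftmost bit set: guest kernel sample */
            u32 pid;        /* task PID; zero for the host kernel */
        } part;
    };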
-
By Hendrik Brueckner:
Various functions in entry.S perform test-under-mask instructions to test for particular bits in memory. Because test-under-mask uses a mask value of one byte, the mask value and the offset into the memory must be calculated manually. This easily introduces errors and is hard to review and read. Introduce the TSTMSK assembler macro to specify a mask constant and let the macro calculate the offset and the byte mask to generate a test-under-mask instruction. The benefit is that existing symbolic constants can now be used for tests. Also the macro checks for zero mask values and mask values that consist of multiple bytes.

Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
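The macro's build-time computation can be modeled in C; a hedged illustration of the offset/mask derivation, not the assembler macro itself:

    /* Derive the byte offset and one-byte mask for a test-under-mask
     * instruction; reject zero masks and bits spanning multiple bytes. */
    static int tstmsk_operands(unsigned long mask, unsigned int *off,
                               unsigned char *bytemask)
    {
        unsigned int i;

        for (i = 0; i < 8; i++) {       /* byte 0 is the most significant */
            unsigned long byte = (mask >> (8 * (7 - i))) & 0xffUL;

            if (!byte)
                continue;
            if (mask != byte << (8 * (7 - i)))
                return -1;              /* bits in more than one byte */
            *off = i;
            *bytemask = byte;
            return 0;
        }
        return -1;                      /* zero mask */
    }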
-
By Hendrik Brueckner:
Previously, the init task did not have an allocated FPU save area, so saving an FPU state was not possible. Now that the vector extension is always enabled, provide a static FPU save area to save FPU states of vector instructions that can be executed quite early.

Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Hendrik Brueckner:
If the kernel detects that the s390 hardware supports the vector facility, it is enabled by default at an early stage. To force it off, use the novx kernel parameter. Note that there is a small time window where the vector facility is enabled before it is forced off. With the vector facility enabled by default, the FPU save and restore functions can be improved. They no longer require managing expensive control register updates to enable or disable the vector enablement control for particular processes.

Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 30 Sep 2015, 1 commit

By Martin Schwidefsky:
The calculation for the SMT scaling factor for a hardware thread which has been partially idle needs to disregard the cycles spent by the other threads of the core while the thread is idle.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 17 Sep 2015, 1 commit

By Heiko Carstens:
The critical section cleanup code fails to add the offset of the thread_struct to the task address. Therefore, if the critical section code gets executed, it may corrupt the task struct or restore the contents of the floating point registers from the wrong memory location.

Fixes: d0164ee2 ("s390/kernel: remove save_fpu_regs() parameter and use __LC_CURRENT instead")
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 03 Aug 2015, 2 commits

By Christian Borntraeger:
Right now we use the address of the sie control block as tag for the sampling data. This is hard to get for users. Let's just use the PID of the cpu thread to mark the hardware samples.

Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Hendrik Brueckner:
All calls to save_fpu_regs() specify the fpu structure of the current task pointer as parameter. The task pointer of the current task can also be retrieved from the CPU lowcore directly. Remove the parameter definition, load the __LC_CURRENT task pointer from the CPU lowcore, and rebase the FPU structure onto the task structure. Apply the same approach to the load_fpu_regs() function.

Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
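Schematically, the parameterless variant rebases onto the lowcore current pointer; a hedged sketch, the real function also handles the vector/floating-point split:

    void save_fpu_regs(void)
    {
        struct task_struct *tsk =
            (struct task_struct *) S390_lowcore.current_task;
        struct fpu *fpu = &tsk->thread.fpu;

        /* ... store FPC and FPRs/VRs into *fpu as before ... */
    }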
-
- 22 Jul 2015, 5 commits

By Martin Schwidefsky:
If a machine check is received while the CPU is in the kernel, only the s390_do_machine_check function will be called. The call to s390_handle_mcck is postponed until the CPU returns to user space. Because of this it is safe to use the asynchronous stack for machine checks even if the CPU is already handling an interrupt.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Martin Schwidefsky:
Reorder the instructions of UPDATE_VTIME to improve superscalar execution, remove duplicate checks for problem-state from the asynchronous interrupt handlers, and move the check for problem-state from the synchronous exit path to the program check path as it is only needed for program checks inside the kernel.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Martin Schwidefsky:
Currently there are two mechanisms to deal with cleanup work due to interrupts. The HANDLE_SIE_INTERCEPT macro is used to undo the changes required to enter SIE in sie64a. If the SIE instruction causes a program check, or an asynchronous interrupt is received, the HANDLE_SIE_INTERCEPT code forwards the program execution to sie_exit. All the other critical sections in entry.S are handled by the code in cleanup_critical that is called by the SWITCH_ASYNC macro. Move the sie64a function to the beginning of the critical section and add the code from HANDLE_SIE_INTERCEPT to cleanup_critical. Add a special case for the sie64a cleanup to the program check handler.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Martin Schwidefsky:
The HANDLE_SIE_INTERCEPT macro is used in the interrupt handlers and the program check handler to undo a few changes done by sie64a. Among them are guest vs host LPP, the gmap ASCE vs kernel ASCE and the bit that indicates that SIE is currently running on the CPU. There is a race of a voluntary SIE exit vs asynchronous interrupts. If the CPU completed the SIE instruction and the TM instruction of the LPP macro at the time it receives an interrupt, the interrupt handler will run while the LPP, the ASCE and the SIE bit are still set up for guest execution. This might result in wrong sampling data, but it will not cause data corruption or lockups. The critical section in sie64a needs to be enlarged to include all instructions that undo the changes required for guest execution.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Hendrik Brueckner:
Improve the save and restore behavior of FPU register contents to use the vector extension within the kernel.

The kernel does not use floating-point or vector registers and, therefore, saving and restoring the FPU register contents are performed for handling signals or switching processes only. To prepare for using vector instructions and vector registers within the kernel, enhance the save behavior and implement a lazy restore at return to user space from a system call or interrupt.

To implement the lazy restore, the save_fpu_regs() function sets a CPU information flag, CIF_FPU, to indicate that the FPU registers must be restored. Saving and setting CIF_FPU is performed in an atomic fashion to be interrupt-safe. When the kernel wants to use the vector extension or wants to change the FPU register state for a task during signal handling, save_fpu_regs() must be called first. The CIF_FPU flag is also set at process switch.

At return to user space, the FPU state is restored. In particular, the FPU state includes the floating-point or vector register contents, as well as the vector-enablement and floating-point control. The FPU state restore and the clearing of CIF_FPU is also performed in an atomic fashion.

For KVM, the restore of the FPU register state is performed when restoring the general-purpose guest registers before the SIE instruction is started. Because the path towards the SIE instruction is interruptible, the CIF_FPU flag must be checked again right before going into SIE. If set, the guest registers must be reloaded again by re-entering the outer SIE loop. This is the same behavior as if the SIE critical section is interrupted.

Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
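The protocol described above, as a hedged pseudo-C sketch; function and flag names come from the message, the bodies and the exit hook name are illustrative:

    void save_fpu_regs(void)
    {
        if (test_cpu_flag(CIF_FPU))
            return;                 /* already saved, registers are free */
        /* ... save FPC and FPRs or VRs into current->thread.fpu ... */
        set_cpu_flag(CIF_FPU);      /* user state must be reloaded later */
    }

    /* on return to user space, and again right before entering SIE: */
    static void arch_exit_to_user(void)
    {
        if (test_cpu_flag(CIF_FPU))
            load_fpu_regs();        /* restore FPC and FPRs/VRs, clear CIF_FPU */
    }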
-
- 20 Jul 2015, 1 commit

By Martin Schwidefsky:
git commit 0c8c0f03 ("x86/fpu, sched: Dynamically allocate 'struct fpu'") moved the thread_struct to the end of the task_struct. This causes some of the offsets used in entry.S to overflow their instruction operand field. To fix this, use aghi to create a dedicated pointer for the thread_struct.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 08 May 2015, 1 commit

By Christian Borntraeger:
exit_sie_sync is used to kick CPUs out of SIE and prevent re-entering at any point in time. This is used to reload the prefix pages and to set the IBS stuff in a way that guarantees that after this function returns we are no longer in SIE. All current users trigger KVM requests. The request must be set before we block the CPUs to avoid races. Let's make this implicit by adding the request to a new function, kvm_s390_sync_requests, that replaces exit_sie_sync, and split out s390_vcpu_block and s390_vcpu_unblock, which can be used to keep CPUs out of SIE independent of requests.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
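A hedged reconstruction of the new entry point; the function names are taken from the message, the body is illustrative:

    void kvm_s390_sync_requests(int req, struct kvm_vcpu *vcpu)
    {
        kvm_make_request(req, vcpu);    /* set the request before blocking */
        s390_vcpu_block(vcpu);          /* prevent re-entering SIE */
        exit_sie(vcpu);                 /* kick the CPU out of SIE */
    }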
-
- 25 Mar 2015, 2 commits

By Heiko Carstens:
Remove the 31 bit syscalls from the syscall table. This is a separate patch just in case I screwed something up, so it can be easily reverted. However the conversion was done with a script, so everything should be ok.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens:
Rename a couple of files to get rid of the "64" suffix. "git blame" will still work.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 08 Dec 2014, 1 commit

By Martin Schwidefsky:
To improve the output of the perf tool, hide most of the symbols from entry[64].S by using the '.L' prefix.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 25 Sep 2014, 1 commit

By Jan Willeke:
Signed-off-by: Jan Willeke <willeke@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 20 May 2014, 2 commits

By Martin Schwidefsky:
The oi and ni instructions used in entry[64].S to set and clear bits in the thread-flags are not guaranteed to be atomic in regard to other CPUs. Split the TIF bits into CPU, pt_regs and thread-info specific bits. Updates on the TIF bits are done with atomic instructions, updates on CPU and pt_regs bits are done with non-atomic instructions.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
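A sketch of the three flag classes; the helper shapes follow the s390 headers, the bit assignments stay illustrative:

    /* per-CPU flags in the lowcore: non-atomic updates suffice */
    static inline void set_cpu_flag(int flag)
    {
        S390_lowcore.cpu_flags |= 1UL << flag;
    }

    /* per-pt_regs flags: only ever touched by the owning context */
    static inline void set_pt_regs_flag(struct pt_regs *regs, int flag)
    {
        regs->flags |= 1UL << flag;
    }

    /* the remaining TIF bits keep using atomic set_bit/clear_bit helpers */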
-
By Martin Schwidefsky:
Always switch to the kernel ASCE in switch_mm. Load the secondary space ASCE in finish_arch_post_lock_switch after checking that any pending page table operations have completed. The primary ASCE is loaded in entry[64].S. With this, the update_primary_asce call can be removed from the switch_to macro and from the start of the switch_mm function. Remove the load_primary argument from update_user_asce/clear_user_asce, rename update_user_asce to set_user_asce and rename update_primary_asce to load_kernel_asce.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
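A hedged sketch of load_kernel_asce; on s390 control register 1 holds the primary ASCE, and the body follows the description above:

    static inline void load_kernel_asce(void)
    {
        unsigned long asce;

        __ctl_store(asce, 1, 1);
        if (asce != S390_lowcore.kernel_asce)
            __ctl_load(S390_lowcore.kernel_asce, 1, 1);
        set_cpu_flag(CIF_ASCE);  /* entry.S reloads the user ASCE on exit */
    }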
-
- 22 Apr 2014, 1 commit

By Jens Freimann:
per_perc_atmid is currently a two-byte field that combines two fields: the PER code and the PER Addressing-and-Translation-Mode Identification (ATMID). Let's make them accessible independently and also rename per_cause to per_code.

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
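In struct-layout terms the change is roughly the following; a sketch of the affected fields, wrapper name hypothetical and offsets omitted:

    struct lowcore_excerpt {
        /* before: __u16 per_perc_atmid; one combined field */
        __u8 per_code;      /* PER code (the former per_cause half) */
        __u8 per_atmid;     /* PER addressing-and-translation-mode id */
    };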
-