- 10 May 2011, 1 commit

Committed by Michael Holzheu
When starting a new CPU we currently jump to start_secondary() without setting register 14 (the return address) correctly. As a result, an invalid return address is stored in the stack frame of start_secondary(), which leads to wrong stack backtraces in kernel dumps. Example:

#00 [1f33fe48] cpu_idle at 10614a
#01 [1f33fe90] start_secondary at 54fa88
#02 [1f33feb8] (null) at 0            <--- invalid

To fix this, start_secondary() is now called with basr/brasl, which sets register 14 correctly. The stack backtrace then looks like the following:

#00 [1f33fe48] cpu_idle at 10614a
#01 [1f33fe90] start_secondary at 54fa88
#02 [1f33feb8] restart_base at 54f41e <--- correct

Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 05 January 2011, 2 commits

Committed by Martin Schwidefsky
Overhaul program event recording (PER) and the code dealing with the ptrace user space interface.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
Add kprobes annotations to get the massive 'probe kernel.function("*") {}' stress test working.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
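For reference, the annotation in question is the kernel's __kprobes function attribute, which places a function in the .kprobes.text section so that kprobes will refuse to put a probe inside it and the probe handling cannot recurse into itself. A minimal sketch, with a hypothetical function name:

#include <linux/kprobes.h>

/*
 * Hypothetical helper on the breakpoint handling path. The __kprobes
 * attribute moves it into .kprobes.text, so a blanket
 * probe kernel.function("*") never instruments it.
 */
static int __kprobes demo_handle_break(struct pt_regs *regs)
{
	/* ... decide whether this breakpoint belongs to kprobes ... */
	return 0;
}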
-
- 29 October 2010, 1 commit

Committed by Martin Schwidefsky
Fix kprobes after git commit 1e54622e broke it. The kprobe_handler is now called with interrupts in whatever state they had at the time of the breakpoint. The single step of the replaced instruction is done with interrupts off, which makes it necessary to enable and disable interrupts in the kprobes code.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 25 October 2010, 3 commits

Committed by Martin Schwidefsky
Do the setup of the stack overflow argument (the sixth system call parameter, which is passed on the stack) right before the branch to the system call function. This simplifies the system call parameter access code.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
Read the external interrupt parameters from the lowcore in the first level interrupt handler in entry[64].S.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
Read all required fields for program checks from the lowcore in the first level interrupt handler in entry[64].S. If the context that caused the fault was enabled for interrupts, we can now re-enable IRQs in entry[64].S.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 28 July 2010, 1 commit

Committed by Heiko Carstens
In case user space is single stepped (PER), the program check handler claims too early that IRQs are enabled on the return path. Subsequent checks will notice that the IRQ mask in the PSW and what lockdep thinks the IRQ mask should be do not match, and therefore will print a warning to the console and disable lockdep. Fix this by doing all the work within the correct context.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 27 May 2010, 1 commit

Committed by Heiko Carstens
This config option enables or disables three single instructions which aren't expensive. This is too fine grained. Besides that, everybody who uses kvm would enable it anyway in order to debug performance problems. Just remove it.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 17 May 2010, 5 commits

Committed by Martin Schwidefsky
Copy the last breaking event address from the lowcore to a new field in the thread_struct on each system entry. Add a new ptrace request PTRACE_GET_LAST_BREAK and a new utrace regset REGSET_LAST_BREAK to query the last breaking event. This is useful for debugging wild branches in user space code.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
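From user space the new request is an ordinary ptrace call whose data argument points to the location that receives the address. A minimal sketch (the attach/stop sequence and error handling are omitted; on s390 the request constant is expected to come from the UAPI <asm/ptrace.h>):

#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <asm/ptrace.h>		/* PTRACE_GET_LAST_BREAK (s390 only) */

/* Read the last breaking-event address of an already stopped tracee. */
static int read_last_break(pid_t pid, unsigned long *addr)
{
	return ptrace(PTRACE_GET_LAST_BREAK, pid, NULL, addr);
}

int main(void)
{
	pid_t pid = 12345;		/* placeholder: an attached, stopped tracee */
	unsigned long last_break = 0;

	if (read_last_break(pid, &last_break) == 0)
		printf("last breaking-event address: %#lx\n", last_break);
	else
		perror("PTRACE_GET_LAST_BREAK");
	return 0;
}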
-
Committed by Carsten Otte
Use the SPP instruction to set a tag on entry to and exit from the virtual machine context. This allows the CPU-measurement facility to distinguish the samples from the host and the different guests.

Signed-off-by: Carsten Otte <cotte@de.ibm.com>
-
Committed by Martin Schwidefsky
A machine check can interrupt the I/O and external interrupt handler at any time. If the machine check occurs while the interrupt handler is waking up from idle, vtime_start_cpu can get executed a second time and the int_clock / async_enter_timer values in the lowcore get clobbered. This can confuse the cpu time accounting. To fix this problem two changes are needed. First, the machine check handler has to use its own copies of int_clock and async_enter_timer, named mcck_clock and mcck_enter_timer. Second, the nested execution of vtime_start_cpu has to be prevented. This is done in s390_idle_check by checking the wait bit in the program status word.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
The system call path in entry[64].S is run with interrupts enabled. Remove the irq tracing check from the system call exit code. If a program check interrupted a context enabled for interrupts, do a call to trace_irq_off_caller in the program check handler before branching to the system call exit code. Restructure the system call and I/O interrupt return code to avoid the lpsw[e] needed to disable machine checks.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
Clean up the #ifdef mess at io_work in entry[64].S and streamline the TIF work code of the system call and I/O exit paths.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 09 April 2010, 1 commit

Committed by Martin Schwidefsky
If a machine check interrupts the I/O interrupt handler on one of the instructions between io_return and io_leave, the critical section cleanup code will move the return PSW to io_work_loop. By doing that, the switch from the asynchronous interrupt stack to the process stack is skipped. If e.g. TIF_NEED_RESCHED is set, things break because the scheduler is called on the asynchronous interrupt stack. Moving the PSW back to io_return instead fixes the problem.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 27 February 2010, 1 commit

Committed by Heiko Carstens
Use asm offsets to make sure the offset defines for struct _lowcore and its layout don't get out of sync. Also add a BUILD_BUG_ON() which checks that the size of the structure is sane. And while at it, change those sites which use odd casts to access the current lowcore; these should use S390_lowcore instead.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
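The asm-offsets mechanism lets entry[64].S pick up structure offsets generated from the C definition at build time, so the two can no longer drift apart. A sketch of the idea (the constant name and the size bound are illustrative, not the actual s390 values):

/* asm-offsets.c style generator (illustrative names) */
#include <linux/kbuild.h>
#include <linux/bug.h>
#include <linux/stddef.h>
#include <asm/lowcore.h>

int main(void)
{
	/* emits "#define __LC_EXAMPLE <offset>" for use in entry[64].S */
	DEFINE(__LC_EXAMPLE, offsetof(struct _lowcore, cpu_nr));

	/* refuse to build if the structure outgrows the prefix area */
	BUILD_BUG_ON(sizeof(struct _lowcore) > 2 * 4096);
	return 0;
}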
-
- 27 January 2010, 1 commit

Committed by Martin Schwidefsky
If irq flags tracing is enabled, the TRACE_IRQS_ON macro expands to a function call which clobbers registers %r0-%r5. The macro is used in the code path for single-stepped system calls. The argument registers %r2-%r6 need to be restored from the stack before the system call function is called.

Cc: stable@kernel.org
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 13 November 2009, 1 commit

Committed by Christian Borntraeger
On s390 there are two ways of specifying the system call number for the svc instruction. The standard way is to use the immediate field in the instruction (or to use EXecute for values unknown at assembly time). This can encode 256 system calls. The kernel ABI also allows putting the system call number in r1 and then executing svc 0 to enable system call numbers > 255. It turns out that single-stepping svc 0 is broken, since the PER program check handler uses r1. We have to use a different register.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
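For illustration, the register-1 convention looks roughly like this from user space (a sketch assuming the usual s390 Linux syscall ABI with arguments in %r2-%r7 and the result in %r2; the helper name is made up):

#include <stdio.h>
#include <sys/syscall.h>	/* __NR_getpid */

/*
 * Issue "svc 0" with the system call number in register 1, as described
 * above. For numbers <= 255 the number could instead be encoded in the
 * immediate field of the instruction itself: "svc <nr>".
 */
static long raw_syscall1(unsigned long nr, unsigned long arg1)
{
	register unsigned long r1 asm("1") = nr;	/* system call number */
	register unsigned long r2 asm("2") = arg1;	/* argument 1 / return value */

	asm volatile("svc 0"
		     : "+d" (r2)
		     : "d" (r1)
		     : "cc", "memory");
	return (long)r2;
}

int main(void)
{
	printf("pid via svc 0: %ld\n", raw_syscall1(__NR_getpid, 0));
	return 0;
}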
-
- 11 September 2009, 1 commit

Committed by Hendrik Brueckner
The sysc_restore_trace_psw and io_restore_trace_psw storage locations are created in the .text section. When creating and IPLing from a named saved system (NSS), writing to these locations causes a protection exception, because the .text section is mapped as shared read-only in the NSS. To permit write access, move the storage locations into the .data section. The problem occurs only when CONFIG_TRACE_IRQFLAGS is set. The git commit that introduced these variables is 411788ea.

Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 26 August 2009, 1 commit

Committed by Josh Stone
s/HAVE_FTRACE_SYSCALLS/HAVE_SYSCALL_TRACEPOINTS/g
s/TIF_SYSCALL_FTRACE/TIF_SYSCALL_TRACEPOINT/g

The syscall enter/exit tracing is no longer specific to just ftrace, so these symbols now have names that reflect their tie to tracepoints instead.

Signed-off-by: Josh Stone <jistone@redhat.com>
Cc: Jason Baron <jbaron@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Cc: Jiaying Zhang <jiayingz@google.com>
Cc: Martin Bligh <mbligh@google.com>
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
LKML-Reference: <1251150194-1713-2-git-send-email-jistone@redhat.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
-
- 12 June 2009, 2 commits

Committed by Heiko Carstens
System call tracer support for s390.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Heiko Carstens
Enable secure computing on s390 as well.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
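Secure computing (seccomp) in its original strict mode is enabled per process via prctl(); a minimal user-space sketch:

#include <stdio.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <linux/seccomp.h>

int main(void)
{
	/*
	 * Strict mode: after this call the process may only use read(),
	 * write(), exit() and sigreturn(); any other system call
	 * terminates it with SIGKILL.
	 */
	if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) != 0) {
		perror("prctl(PR_SET_SECCOMP)");
		return 1;
	}

	write(STDOUT_FILENO, "seccomp strict mode active\n", 27);
	_exit(0);
}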
-
- 14 April 2009, 1 commit

Committed by Martin Schwidefsky
Reset the cpu timer to the maximum value and correctly initialize the cpu accounting values in the lowcore when the cpu is started.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 31 December 2008, 2 commits

Committed by Martin Schwidefsky
The extract cpu time (ectg) instruction allows a user process to get the current thread cputime without calling into the kernel. The code that uses the instruction needs to switch to access-register mode to get access to the per-cpu info page that contains the two base values needed to calculate the current cputime from the CPU timer with the ectg instruction.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
Distinguish the cputime of the idle process where idle is actually using cpu cycles from the cputime where idle is sleeping on an enabled wait PSW. The former is accounted as system time, the latter as idle time.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 25 December 2008, 2 commits

Committed by Martin Schwidefsky
On s390 we always want to run with precise cputime accounting. Remove the config options VIRT_TIMER and VIRT_CPU_ACCOUNTING.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Al Viro
On s390 we have ret_from_fork jump not to the "do all work we normally do on return from syscall" code as on x86, ppc, etc., but to the "do all such work except audit" code. Historical reasons - the codepath triggered when we have the AUDIT process flag set is separated from the normal one, and they converge at sysc_return, which is the common part of post-syscall work. And it does not include calling audit_syscall_exit() - that is done at the end of the sysc_tracesys path, just before that path jumps to sysc_return. IOW, the child returning from fork()/clone()/vfork() doesn't call audit_syscall_exit() at all, so no matter what we do with its audit context, we are not going to see the audit entry.

The fix is simple: have ret_from_fork go to the point just past the call of sys_.... in the 'we have AUDIT flag set' path. There we have (64-bit variant; for 31-bit the situation is the same):

sysc_tracenogo:
	tm	__TI_flags+7(%r9),(_TIF_SYSCALL_TRACE|_TIF_SYSCALL_AUDIT)
	jz	sysc_return
	la	%r2,SP_PTREGS(%r15)	# load pt_regs
	larl	%r14,sysc_return	# return point is sysc_return
	jg	do_syscall_trace_exit

which is precisely what we need - check the flag, bugger off to sysc_return if not set, otherwise call do_syscall_trace_exit() and bugger off to sysc_return. r9 has just been properly set by ret_from_fork itself, so we are fine.

Tested on s390x, seems to work fine.

WARNING: it's been about 16 years since my last contact with 3X0 assembler[1], so additional review would be very welcome. I don't think I've managed to screw it up, but...

[1] that *was* in another country and besides, the box is dead...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 27 November 2008, 1 commit

Committed by Martin Schwidefsky
syscall_get_nr() currently returns a valid result only if the call chain of the traced process includes do_syscall_trace_enter(). But collect_syscall() can be called for any sleeping task, so the result of syscall_get_nr() is in general completely bogus. To make syscall_get_nr() work for any sleeping task, the traps field in pt_regs is replaced with svcnr - the number of the system call the process is executing. If svcnr == 0 the process is not on a system call path. syscall_get_arguments() and syscall_set_arguments() use regs->gprs[2] for the first system call parameter. This is incorrect since gprs[2] may have been overwritten with the system call number if the call chain includes do_syscall_trace_enter(). Use regs->orig_gpr2 instead.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
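The description translates into accessors along these lines (a sketch of the idea with made-up names, not the exact s390 asm/syscall.h code):

#include <linux/sched.h>
#include <asm/ptrace.h>

/*
 * Sketch of the accessors described above: svcnr == 0 means "not in a
 * system call", and the first argument is taken from orig_gpr2 because
 * gprs[2] may already hold the system call number on the trace path.
 */
static inline long demo_syscall_get_nr(struct task_struct *task,
				       struct pt_regs *regs)
{
	return regs->svcnr ? regs->svcnr : -1;
}

static inline unsigned long demo_syscall_first_arg(struct pt_regs *regs)
{
	return regs->orig_gpr2;
}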
-
- 15 November 2008, 1 commit

Committed by Heiko Carstens
With CONFIG_IRQSOFF_TRACER the trace_hardirqs_off() function includes a call to __builtin_return_address(1). But we call trace_hardirqs_off() from early entry code, where we have just a single stack frame. So this results in a kernel stack backchain walk that would walk beyond the kernel stack. Following the NULL-terminated backchain, this results in a lowcore read access. To fix this we simply call trace_hardirqs_off_caller() and pass the current instruction pointer.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 11 October 2008, 1 commit

Committed by Martin Schwidefsky
* System call parameter and result access functions
* Add tracehook calls
* Split syscall_trace into two functions, do_syscall_trace_enter and do_syscall_trace_exit

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 07 May 2008, 2 commits

Committed by Christian Borntraeger
From: Martin Schwidefsky <schwidefsky@de.ibm.com>

This patch fixes a bug with cpu-bound guests on kvm-s390. Sometimes it was impossible to deliver a signal to a spinning guest. We used preemption as a workaround: the preemption notifiers called vcpu_load, which checked for pending signals and triggered a host intercept. But even with preemption, a SIGKILL was not delivered immediately.

This patch changes the low-level host interrupt handler to check for the SIE instruction if TIF_WORK is set. In that case we change the instruction pointer of the return PSW to rerun the vcpu_run loop. The kvm code sees an intercept reason 0 if that happens, and this patch adds accounting for this type of intercept as well.

The advantages:
- works with and without preemption
- signals are delivered immediately
- much better host latencies without preemption

Acked-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
On return from syscall or interrupt, we have to check if we return to userspace (likely) and if there is work to do (less likely) to decide if we handle the work. We can optimize this check: first check for the less likely work case and then check for userspace. This patch is also a preparation for an additional patch that fixes a bug in KVM dealing with cpu-bound guests.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 30 April 2008, 1 commit

Committed by Roland McGrath
TIF_RESTORE_SIGMASK no longer needs to be in the _TIF_WORK_* masks. Those low bits are scarce, and are all used up now. Renumber TIF_RESTORE_SIGMASK to free one up.

Signed-off-by: Roland McGrath <roland@redhat.com>
Cc: Oleg Nesterov <oleg@tv-sign.ru>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 17 April 2008, 1 commit

Committed by Christian Borntraeger
Newer s390 models have a breaking-event-address-recording register. Each time an instruction causes a break in the sequential instruction execution, the address is saved in that hardware register. On a program interrupt the address is copied to the lowcore address 272-279, which makes it software accessible. This patch changes the program check handler and the stack overflow checker to copy the value into the pt_regs argument. The oops output is enhanced to show the last known breaking address. It might give additional information if the stack trace is corrupted. The feature is only available on 64 bit.

The new oops output looks like:

[---------snip----------]
Modules linked in: vmcp sunrpc qeth_l2 dm_mod qeth ccwgroup
CPU: 2 Not tainted 2.6.24zlive-host #8
Process modprobe (pid: 4788, task: 00000000bf3d8718, ksp: 00000000b2b0b8e0)
Krnl PSW : 0704200180000000 000003e000020028 (vmcp_init+0x28/0xe4 [vmcp])
           R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:0 CC:2 PM:0 EA:3
Krnl GPRS: 0000000004000002 000003e000020000 0000000000000000 0000000000000001
           000000000015734c ffffffffffffffff 000003e0000b3b00 0000000000000000
           000003e00007ca30 00000000b5bb5d40 00000000b5bb5800 000003e0000b3b00
           000003e0000a2000 00000000003ecf50 00000000b2b0bd50 00000000b2b0bcb0
Krnl Code: 000003e000020018: c0c000040ff4	larl	%r12,3e0000a2000
           000003e00002001e: e3e0f0000024	stg	%r14,0(%r15)
           000003e000020024: a7f40001		brc	15,3e000020026
          >000003e000020028: e310c0100004	lg	%r1,16(%r12)
           000003e00002002e: c020000413dc	larl	%r2,3e0000a27e6
           000003e000020034: c0a00004aee6	larl	%r10,3e0000b5e00
           000003e00002003a: a7490001		lghi	%r4,1
           000003e00002003e: a75900f0		lghi	%r5,240
Call Trace:
([<000000000014b300>] blocking_notifier_call_chain+0x2c/0x40)
 [<000000000015735c>] sys_init_module+0x19d8/0x1b08
 [<0000000000110afc>] sysc_noemu+0x10/0x16
 [<000002000011cda2>] 0x2000011cda2
Last Breaking-Event-Address:
 [<000003e000020024>] vmcp_init+0x24/0xe4 [vmcp]
[---------snip----------]

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
-
- 05 February 2008, 1 commit

Committed by Heiko Carstens
Fix a couple of section mismatches. And since we touch the code anyway, change the IPL code to use C99 initializers.

Cc: Michael Holzheu <holzheu@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
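C99 (designated) initializers name the fields being set instead of relying on their order; a small sketch with a made-up structure:

#include <stdio.h>

/* Made-up structure purely to illustrate the two initializer styles. */
struct ipl_block_demo {
	unsigned int len;
	unsigned char type;
	const char *name;
};

/* Positional (pre-C99) style: meaning depends on field order. */
static struct ipl_block_demo old_style = { 32, 1, "fcp" };

/* C99 designated initializers: self-documenting and order-independent. */
static struct ipl_block_demo new_style = {
	.len  = 32,
	.type = 1,
	.name = "fcp",
};

int main(void)
{
	printf("%s vs %s\n", old_style.name, new_style.name);
	return 0;
}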
-
- 20 November 2007, 2 commits

Committed by Heiko Carstens
When returning from IRQ handling and TIF_NEED_RESCHED is set, we must call preempt_schedule_irq() instead of schedule(). Otherwise the BKL might be unlocked in schedule() and therefore everything that relies on the BKL is broken.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
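In C terms the check described here boils down to the following pattern (a sketch with a hypothetical helper name; the real s390 logic lives in entry[64].S):

#include <linux/sched.h>
#include <linux/preempt.h>

/*
 * Sketch of the reschedule check on return from an interrupt.
 * preempt_schedule_irq() is the variant meant for IRQ return paths;
 * calling plain schedule() here would, among other things, drop the
 * BKL behind the interrupted code's back.
 */
static void demo_irq_exit_resched(void)
{
	if (need_resched())
		preempt_schedule_irq();	/* not schedule() */
}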
-
Committed by Heiko Carstens
Current support for TRACE_IRQFLAGS and lockdep_sys_exit is broken. IRQ flag tracing is broken for program checks. Even worse, the newly introduced calls to lockdep_sys_exit are in the critical section code, which is not supposed to call any C functions. In addition, the checks whether locks are still held are also done when returning to kernel code, which is broken as well. Fix all this by disabling interrupts and machine checks at the exit paths and then doing the appropriate checks and calls.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 12 October 2007, 1 commit

Committed by Heiko Carstens
Run the lockdep_sys_exit hook before returning to user space.

Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 27 July 2007, 1 commit

Committed by Heiko Carstens
If a machine check is pending and the external or I/O interrupt handler returns to userspace, io_mcck_pending is going to call s390_handle_mcck. Before this happens, a call to TRACE_IRQS_ON was already made, since we know that we are going back to userspace and hence interrupts will be enabled. So there was an indication that interrupts are enabled while in reality they are still disabled. s390_handle_mcck will do a local_irq_save/restore pair and confuse lockdep, which later complains about inconsistent irq tracing. To solve this, just call trace_hardirqs_off before calling s390_handle_mcck and trace_hardirqs_on afterwards.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
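Expressed in C, the fix amounts to bracketing the machine check handling like this (a sketch with a hypothetical wrapper; on s390 this is done in the entry assembly via the TRACE_IRQS_* macros):

#include <linux/irqflags.h>

/* provided elsewhere in arch/s390; shown here only as an extern */
extern void s390_handle_mcck(void);

/*
 * Sketch of the ordering described above: tell lockdep that IRQs are
 * (still) off before the handler toggles the real IRQ mask, and only
 * report them as on again afterwards.
 */
static void demo_handle_pending_mcck(void)
{
	trace_hardirqs_off();
	s390_handle_mcck();
	trace_hardirqs_on();
}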
-
- 10 July 2007, 1 commit

Committed by Martin Schwidefsky
After the in-kernel system call has been removed, the system call path can be optimized. The problem state bit of the old PSW is always set between system_call and sysc_do_svc. SAVE_ALL_SVC uses this information to avoid two instructions.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-