- 11 April 2012, 3 commits
-
-
Committed by Michael Holzheu
Add TASKSTATS options, enable CRASH_DUMP, and regenerate the defconfig file with "make savedefconfig". Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
Git commit 36409f63 "use generic RCU page-table freeing code" introduced a TLB flushing bug. Partially revert the above git commit and go back to the s390 specific page table flush code. For s390 the TLB can contain three types of entries: "normal" TLB page-table entries, TLB combined region-and-segment-table (CRST) entries and real-space entries. Linux does not use real-space entries, which leaves normal TLB entries and CRST entries. The CRST entries are intermediate steps in the page-table translation called translation paths. For example, a 4K page access in a three-level page table setup will create two CRST TLB entries and one page-table TLB entry. The advantage of that approach is that a page access next to the previous one can reuse the CRST entries and needs just a single read from memory to create the page-table TLB entry. The disadvantage is that the TLB flushing rules are more complicated: before any page table may be freed, the TLB needs to be flushed. In short: the generic RCU page-table freeing code is incorrect for the CRST entries, in particular the check for mm_users < 2 is troublesome. This is applicable to 3.0+ kernels. Cc: <stable@vger.kernel.org> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
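To illustrate the flushing rule that makes the generic RCU approach unsafe here, below is a minimal conceptual sketch (not the actual s390 implementation; the freeing helper name is an assumption): the TLB of the address space must be flushed before a page table that may back cached CRST entries is released.

```c
#include <linux/mm.h>
#include <asm/tlbflush.h>

/* Conceptual sketch only: free a page table only after all TLB entries
 * derived from it -- including intermediate CRST entries -- are gone. */
static void free_page_table_sketch(struct mm_struct *mm, unsigned long *table)
{
	__tlb_flush_mm(mm);			/* flush first: CRST entries may still be cached */
	free_page((unsigned long)table);	/* only now is the table safe to reuse */
}
```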
-
Committed by Michael Holzheu
Currently in the memcpy_real() function interrupts are disabled with __arch_local_irq_stnsm(). To notify lockdep that interrupts are disabled, this patch uses local_irq_save() instead. The function __arch_local_irq_stnsm() is still used for switching to real mode. Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
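A hedged sketch of the resulting shape of the function (the copy helper name and the stnsm mask value are illustrative assumptions, not taken from the source): local_irq_save() makes the disabled-interrupt state visible to lockdep, while __arch_local_irq_stnsm() still performs the switch to real mode.

```c
#include <linux/types.h>
#include <linux/irqflags.h>
#include <asm/irqflags.h>

/* Sketch only: do_real_copy() stands in for the actual real-mode copy loop. */
static int memcpy_real_sketch(void *dest, void *src, size_t count)
{
	unsigned long flags;
	int rc;

	if (!count)
		return 0;
	local_irq_save(flags);			/* lockdep now sees irqs disabled */
	__arch_local_irq_stnsm(0xf8UL);		/* switch to real mode; mask is illustrative */
	rc = do_real_copy(dest, src, count);	/* hypothetical helper */
	local_irq_restore(flags);
	return rc;
}
```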
-
- 30 March 2012, 1 commit
-
-
Committed by Heiko Carstens
Signed-off-by: Heiko Carstens <h.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 29 March 2012, 2 commits
-
-
Committed by David Howells
Delete all instances of asm/system.h as they should be redundant by this point. Signed-off-by: David Howells <dhowells@redhat.com>
-
Committed by David Howells
Disintegrate asm/system.h for S390. Signed-off-by: David Howells <dhowells@redhat.com> cc: linux-s390@vger.kernel.org
-
- 24 March 2012, 1 commit
-
-
Committed by Jason Baron
The motivation for this patchset was that I was looking at a way for a qemu-kvm process to exclude the guest memory from its core dump, which can be quite large. There are already a number of filter flags in /proc/<pid>/coredump_filter; however, these allow one to specify 'types' of kernel memory, not specific address ranges (which is needed in this case). Since there are no more vma flags available, the first patch eliminates the need for the 'VM_ALWAYSDUMP' flag. The flag is used internally by the kernel to mark vdso and vsyscall pages. However, it is simple enough to check if a vma covers a vdso or vsyscall page without the need for this flag. The second patch then replaces the 'VM_ALWAYSDUMP' flag with a new 'VM_NODUMP' flag, which can be set by userspace using new madvise flags: 'MADV_DONTDUMP', and unset via 'MADV_DODUMP'. The core dump filters continue to work the same as before unless 'MADV_DONTDUMP' is set on the region. The qemu code which implements this feature is at: http://people.redhat.com/~jbaron/qemu-dump/qemu-dump.patch In my testing the qemu core dump shrunk from 383MB -> 13MB with this patch. I also believe that the 'MADV_DONTDUMP' flag might be useful for security sensitive apps, which might want to select which areas are dumped. This patch: The VM_ALWAYSDUMP flag is currently used by the coredump code to indicate that a vma is part of a vsyscall or vdso section. However, we can determine if a vma is in one of these sections by checking it against the gate_vma and checking for a non-NULL return value from arch_vma_name(). Thus, freeing a valuable vma bit. Signed-off-by: Jason Baron <jbaron@redhat.com> Acked-by: Roland McGrath <roland@hack.frob.com> Cc: Chris Metcalf <cmetcalf@tilera.com> Cc: Avi Kivity <avi@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
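A small self-contained userspace example of the new madvise flag; the fallback #define matches the value used by the patch series, but treat it as an assumption if your headers already provide MADV_DONTDUMP.

```c
#include <stdio.h>
#include <sys/mman.h>

#ifndef MADV_DONTDUMP
#define MADV_DONTDUMP 16	/* fallback for older headers */
#endif

int main(void)
{
	size_t len = 64UL << 20;	/* e.g. a large guest-memory-like region */
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	/* Exclude this range from any core dump of the process. */
	if (madvise(buf, len, MADV_DONTDUMP) != 0)
		perror("madvise(MADV_DONTDUMP)");
	/* ... use the region; a subsequent core dump will skip it ... */
	munmap(buf, len);
	return 0;
}
```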
-
- 23 March 2012, 4 commits
-
-
Committed by Martin Schwidefsky
A kernel compiled with SMP=n will not register any cpu devices. The resulting kernel image fails to boot with this error message: kernel BUG at fs/sysfs/group.c:65! Use GENERIC_CPU_DEVICES=y if SMP=n to get the missing cpu device. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Hendrik Brueckner
Add a perf PMU to access the CPU-measurement counter facility (CPUM CF). CPUM CF provides multiple counter sets for measuring generic, problem-state, and crypto activities. An extended counter set for the IBM System z10 and IBM z196 mainframes is also available. Counters from the basic and problem-state counter set are mapped to generic perf hardware events. Other counters are accessible through raw events. For a list of available counter sets and counters, see: - The Load-Program-Parameter and the CPU-Measurement Facilities (SA23-2260) - The CPU-Measurement Facility Extended Counters Definition for z10 and z196 (SA23-2261) Reviewed-by: Jan Glauber <jang@linux.vnet.ibm.com> Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
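Since the basic counter set is mapped to generic perf hardware events, a plain perf_event_open() program can read those counters without any s390-specific code; raw events would only be needed for the other counter sets. A minimal userspace example:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
			    int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr;
	long long count;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;		/* basic counter set maps to these */
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.disabled = 1;

	fd = perf_event_open(&attr, 0, -1, -1, 0);	/* current task, any cpu */
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}
	ioctl(fd, PERF_EVENT_IOC_RESET, 0);
	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
	/* ... workload to measure ... */
	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
	if (read(fd, &count, sizeof(count)) == sizeof(count))
		printf("cpu cycles: %lld\n", count);
	close(fd);
	return 0;
}
```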
-
Committed by Jan Glauber
Prepare the measurement facility, which is currently used only by oprofile, for multiple users. To achieve that, the measurement alert interrupt control bit needs to be protected. The measurement alert definitions are moved to a header file and an interrupt mask is added so that users can discard interrupts if they are for a different measurement subsystem. Reviewed-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Ben Hutchings
This function is defined for use in exec, not in modules. No other architecture exports its implementation. Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 21 March 2012, 1 commit
-
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 16 March 2012, 1 commit
-
-
Committed by Chris Metcalf
When using the "compat" APIs, architectures will generally want to be able to make direct syscalls to msgsnd(), shmctl(), etc., and in the kernel we would want them to be handled directly by compat_sys_xxx() functions, as is true for other compat syscalls. However, for historical reasons, several of the existing compat IPC syscalls do not do this. semctl() expects a pointer to the fourth argument, instead of the fourth argument itself. msgsnd(), msgrcv() and shmat() expect arguments in a different order. This change adds an ARCH_WANT_OLD_COMPAT_IPC config option that can be set to preserve this behavior for ports that use it (x86, sparc, powerpc, s390, and mips). No actual semantics are changed for those architectures, and there is only a minimal amount of code refactoring in ipc/compat.c. Newer architectures like tile (and perhaps future architectures such as arm64 and unicore64) should not select this option, and thus can avoid having any IPC-specific code at all in their architecture-specific compat layer. In the same vein, if this option is not selected, IPC_64 mode is assumed, since that's what the <asm-generic> headers expect. The workaround code in "tile" for msgsnd() and msgrcv() is removed with this change; it also fixes the bug that shmat() and semctl() were not being properly handled. Reviewed-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
- 13 March 2012, 2 commits
-
-
Committed by Michael Holzheu
Currently pcpu_devices->panic_stack is passed to pcpu_delegate() in smp_call_ipl_cpu(). This is wrong because pcpu_delegate() expects the bottom (high address) of the stack and pcpu_devices->panic_stack points to the top (low address). We now pass the bottom of the stack, which is pcpu_devices->panic_stack + PAGE_SIZE. Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
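A minimal sketch of the corrected call site (the surrounding function body is an assumption; the point is only the + PAGE_SIZE adjustment):

```c
/* Sketch: pcpu->panic_stack holds the lowest address of the stack page,
 * while pcpu_delegate() wants the initial stack pointer, i.e. the high
 * end of the page on a downward-growing stack. */
static void smp_call_ipl_cpu_sketch(void (*func)(void *), void *data)
{
	struct pcpu *pcpu = &pcpu_devices[0];	/* the IPL cpu */

	pcpu_delegate(pcpu, func, data, pcpu->panic_stack + PAGE_SIZE);
}
```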
-
Committed by Peter Zijlstra
Stepan found the following race between cpu bring-up and smp_call_function():

  CPU0                               CPUn
  _cpu_up()
  __cpu_up()
                                     bootstrap()
                                       notify_cpu_starting()
                                       set_cpu_online()
                                       while (!cpu_active())
                                         cpu_relax()
  <PREEMPT-out>
  smp_call_function(.wait=1)
    /* we find cpu_online() is true */
    arch_send_call_function_ipi_mask()
    /* wait-forever-more */
  <PREEMPT-in>
                                     local_irq_enable()
                                     cpu_notify(CPU_ONLINE)
                                       sched_cpu_active()
                                         set_cpu_active()

Now the purpose of cpu_active is mostly with bringing down a cpu, where we mark it !active to prevent the load-balancer from moving tasks to it while we tear down the cpu. This is required because we only update the sched_domain tree after we brought the cpu down. And this is needed so that some tasks can still run while we bring it down; we just don't want new tasks to appear. On cpu-up however the sched_domain tree doesn't yet include the new cpu, so it is invisible to the load-balancer, regardless of the active state. So instead of setting the active state after we boot the new cpu (and consequently having to wait for it before enabling interrupts) set the cpu active before we set it online and avoid the whole mess. Reported-by: Stepan Moskovchenko <stepanm@codeaurora.org> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Acked-by: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1323965362.18942.71.camel@twins Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 11 March 2012, 12 commits
-
-
Committed by Michael Holzheu
Because the vmcore_info pointer is not 8 byte aligned, it should never be accessed directly. The reason is that the compiler assumes that 64 bit pointers are always double word aligned. To ensure safe access, the vmcore_info type in struct lowcore is changed from u64 to a u8[8] array and a comment is added. Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Reported-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
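A hedged sketch of the idea (field placement and helper name are illustrative): storing the value as a byte array keeps the compiler from assuming double-word alignment, and every access goes through memcpy().

```c
#include <linux/string.h>
#include <linux/types.h>

struct lowcore_sketch {
	__u8 vmcore_info[8];	/* 64-bit pointer, deliberately not typed as u64 */
};

static unsigned long get_vmcore_info_sketch(struct lowcore_sketch *lc)
{
	unsigned long addr;

	memcpy(&addr, lc->vmcore_info, sizeof(addr));	/* alignment-safe access */
	return addr;
}
```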
-
Committed by Heiko Carstens
The first line of a stack dump has the wrong (i.e. no) indentation. Just fix this after more than 10 years. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Michael Holzheu
In order to allow kdump based stand-alone dump, some information has to be passed from the old kernel to the new dump kernel. This is done via the struct "os_info" that contains the following fields: * crashkernel base and size * reipl block * vmcoreinfo * init function A pointer to os_info is stored at a well known storage location and the whole structure as well as all fields are secured with checksums. Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
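A hypothetical layout sketch of such a hand-over structure; the real field names, entry layout and checksum algorithm are not taken from the source and are assumptions for illustration only.

```c
#include <linux/types.h>

struct os_info_entry_sketch {
	u64	addr;		/* where the referenced data (e.g. reipl block) lives */
	u64	size;
	u32	csum;		/* checksum over the referenced data */
};

struct os_info_sketch {
	u64	magic;		/* identifies the structure at its well known location */
	u32	csum;		/* checksum over this structure itself */
	u64	crashkernel_addr;
	u64	crashkernel_size;
	struct os_info_entry_sketch reipl;	/* reipl block */
	struct os_info_entry_sketch vmcoreinfo;	/* vmcoreinfo note */
	u64	init_fn;	/* init function entry point */
};
```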
-
Committed by Matt Fleming
Use the new helper function introduced in commit 5e6292c0 ("signal: add block_sigmask() for adding sigmask to current->blocked") which centralises the code for updating current->blocked after successfully delivering a signal and reduces the amount of duplicate code across architectures. In the past some architectures got this code wrong, so using this helper function should stop that from happening again. Cc: Oleg Nesterov <oleg@redhat.com> Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: linux-s390@vger.kernel.org Signed-off-by: Matt Fleming <matt.fleming@intel.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
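For reference, the shared helper roughly amounts to the following (reconstructed from the description of commit 5e6292c0; treat the details as an approximation, not the verbatim implementation):

```c
#include <linux/sched.h>
#include <linux/signal.h>

/* Approximate behaviour of block_sigmask(): block the signals in the
 * handler's sa_mask plus, unless SA_NODEFER is set, the signal itself. */
static void block_sigmask_sketch(struct k_sigaction *ka, int signr)
{
	sigset_t blocked;

	sigorsets(&blocked, &current->blocked, &ka->sa.sa_mask);
	if (!(ka->sa.sa_flags & SA_NODEFER))
		sigaddset(&blocked, signr);
	set_current_blocked(&blocked);
}
```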
-
Committed by Michael Holzheu
Currently the following mechanisms are available to move active Linux on System z instances between machines: * z/VM 6.2 SSI (Single System Image) * Suspend/resume In this patch the term LGR (Linux Guest Relocation) is used for moving Linux instances. Because such an operation is critical, it should be detectable from Linux. With this patch, for both a live system and a kernel dump, the information about LGRs is accessible. To identify a guest, stsi and stfle data is used. A new function lgr_info_log() compares the current data (lgr_info_cur) with the last recorded one (lgr_info_last). In case the two data sets differ, lgr_info_cur is logged to the "lgr" s390dbf. The following trigger points call lgr_info_log(): * panic * die * kdump * LGR timer * PSW restart * QDIO recovery * resume This patch also changes the s390dbf hex_ascii view. Now only printable ASCII characters are shown. Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Heiko Carstens
The external interrupt handlers have a parameter called ext_int_code. Despite the name, this parameter contains not only the ext_int_code but also the "cpu address" (POP) which caused the external interrupt. To make the code a bit more obvious, pass a struct instead so the called function can easily distinguish between external interrupt code and cpu address. The cpu address field however is named "subcode", since some external interrupt sources do not pass a cpu address but a different parameter (or none at all). Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
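A hedged sketch of the new parameter shape (member and handler names are illustrative; the interrupt code value shown is only an example):

```c
/* Sketch: handlers receive a small struct instead of one packed u32. */
struct ext_code_sketch {
	unsigned short subcode;	/* cpu address or other source-specific parameter */
	unsigned short code;	/* the external interrupt code itself */
};

static void example_ext_handler(struct ext_code_sketch ext_code,
				unsigned int param32, unsigned long param64)
{
	if (ext_code.code != 0x1202)	/* example: external call */
		return;
	/* ... handle the interrupt signalled by cpu ext_code.subcode ... */
}
```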
-
Committed by Heiko Carstens
Set __ARCH_IRQ_EXIT_IRQS_DISABLED in order to optimize irq_exit() a bit, since we call __do_softirq() instead of do_softirq(). This saves several needless checks, pointless interrupt disabling and an extra branch. If do_softirq() gets called from process context we still switch to the async stack. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Michael Holzheu
Use the new copy_to_absolute_zero() function instead of manual "stura" and "sturg" to make the code shorter and more readable. Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
Whenever the cpu loads an enabled wait PSW it will appear as idle to the underlying host system. The code in default_idle calls vtime_stop_cpu which does the necessary voodoo to get the cpu time accounting right. The udelay code just loads an enabled wait PSW. To correct this, rework the vtime_stop_cpu/vtime_start_cpu logic and move the difficult parts to entry[64].S; vtime_stop_cpu can now be called from anywhere and vtime_start_cpu is gone. The correction of the cpu time during wakeup from an enabled wait PSW is done with a critical section in entry[64].S. As vtime_start_cpu is gone, s390_idle_check can be removed as well. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
Define struct pcpu and merge some of the NR_CPUS arrays into it, including __cpu_logical_map, current_set and smp_cpu_state. Split smp related functions into those operating on physical cpus and those operating on a logical cpu number. Make the functions for physical cpus use a pointer to a struct pcpu. This hides the knowledge about cpu addresses in smp.c, entry[64].S and swsusp_asm64.S, thus remove the sigp.h header. The PSW restart mechanism is used to start secondary cpus, to call a function on an online cpu, to call a function on the ipl cpu, and for the nmi signal. Replace the different assembler functions with a single function restart_int_handler. The new entry point calls a function whose pointer is stored in the lowcore of the target cpu and it can wait for the source cpu to stop. This covers all existing use cases. Overall the code is now simpler and there are ~380 lines less code. Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
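A hypothetical sketch of the consolidation: one array of per-cpu descriptors replaces several parallel NR_CPUS arrays (the field names here are assumptions for illustration, not the actual struct pcpu definition).

```c
#include <linux/threads.h>

struct pcpu_sketch {
	unsigned short address;		/* physical cpu address (was __cpu_logical_map) */
	signed char state;		/* configured/online state (was smp_cpu_state) */
	struct _lowcore *lowcore;	/* lowcore page of this cpu */
	unsigned long panic_stack;
	unsigned long async_stack;
};

static struct pcpu_sketch pcpu_devices_sketch[NR_CPUS];
```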
-
Committed by Martin Schwidefsky
The 16 bit value at the lowcore location with offset 0x84 is the cpu address that is associated with an external interrupt. Rename the field from cpu_addr to ext_cpu_addr to make that clear. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Michael Holzheu
With gcc 4.6.0 we get a false compile warning: arch/s390/kernel/setup.c: In function 'setup_arch': arch/s390/kernel/setup.c:767:3: warning: 'msg' may be used uninitialized in this function [-Wuninitialized] arch/s390/kernel/setup.c:753:8: note: 'msg' was declared here This patch makes gcc quiet. Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 08 March 2012, 7 commits
-
-
Committed by Takuya Yoshikawa
Some members of kvm_memory_slot are not used by every architecture. This patch is the first step to make this difference clear by introducing kvm_memory_slot::arch; lpage_info is moved into it. Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
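A hedged sketch of the resulting shape (simplified; the member list follows the common kvm_memory_slot fields, and the empty s390 arch part is an assumption):

```c
/* Per-arch part: x86 keeps its lpage_info arrays here, s390 needs nothing. */
struct kvm_arch_memory_slot_sketch {
};

struct kvm_memory_slot_sketch {
	unsigned long base_gfn;
	unsigned long npages;
	unsigned long *dirty_bitmap;
	struct kvm_arch_memory_slot_sketch arch;	/* new architecture-specific member */
};
```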
-
Committed by Christian Borntraeger
There are several cases where we need the control registers for userspace. Let's also provide those in kvm_run. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Jens Freimann
When we do a stop and store status we need to pass the ACTION_STOP_ON_STOP flag to __sigp_stop(). Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Jens Freimann
In __inject_sigp_stop() do nothing when the CPU is already in stopped state. Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Jens Freimann
On reboot the guest sends, in smp_send_stop(), a sigp stop to all CPUs except for the current CPU. Then the guest switches to the IPL cpu by sending a restart to the IPL CPU, followed by a sigp stop to the current cpu. Since restart is handled by userspace it's possible that the restart is delivered before the old stop. This means that the IPL CPU isn't restarted and we have no running CPUs. So let's make sure that there is no stop action pending when we do the restart. Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Jens Freimann
In handle_stop() handle the stop bit before doing the store status, as described for "Stop and Store Status" in the Principles of Operation. We have to give up the local_int.lock before calling kvm store status since it calls gmap_fault(), which might sleep. Since local_int.lock only protects local_int.* and not guest memory we can give up the lock. Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Christian Borntraeger
Commit 7eef87dc (KVM: s390: fix register setting) added a load of the floating point control register to the KVM_SET_FPU path. Let's make sure that the fpc is valid. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
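A hedged sketch of such a validity check; the exact mask of valid floating-point-control bits is an assumption here, not taken from the source.

```c
#include <linux/errno.h>
#include <linux/types.h>

#define FPC_VALID_MASK_SKETCH 0xf8f8ff03u	/* assumption: bits outside this mask are reserved */

struct vcpu_fpu_sketch {
	u32 fpc;
};

static int kvm_set_fpu_sketch(struct vcpu_fpu_sketch *vcpu, u32 fpc)
{
	if (fpc & ~FPC_VALID_MASK_SKETCH)
		return -EINVAL;		/* refuse to load an invalid fpc */
	vcpu->fpc = fpc;
	return 0;
}
```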
-
- 05 March 2012, 6 commits
-
-
Committed by Christian Borntraeger
This patch adds the access registers to the kvm_run structure. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Christian Borntraeger
This patch adds the general purpose registers to the kvm_run structure. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Christian Borntraeger
Add the prefix register to the synced register field in kvm_run. While the prefix register is needed read-only most of the time, this patch also adds handling for guest dirtying of the prefix register. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Christian Borntraeger
On some cpus the overhead for virtualization instructions is in the same range as a system call. Having to call multiple ioctls to get/set registers will make certain userspace-handled exits more expensive than necessary. Let's provide a section in kvm_run that works as a shared save area for guest registers. We also provide two 64bit flags fields (architecture specific) that will specify 1. which parts of these fields are valid and 2. which registers were modified by userspace. Each bit in these flag fields will define a group of registers (like general purpose) or a single register. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
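A hedged sketch of the shared save area described above, following the kvm_run convention of a valid-flags word, a dirty-flags word and a per-architecture union (the s390 register selection and the flag bit shown are assumptions):

```c
#include <linux/types.h>

#define KVM_SYNC_GPRS_SKETCH	(1ULL << 0)	/* one flag bit per register group */

struct kvm_sync_regs_sketch {
	__u64 gprs[16];			/* illustrative register group */
};

struct kvm_run_tail_sketch {
	__u64 kvm_valid_regs;		/* kernel -> userspace: these sync fields are valid */
	__u64 kvm_dirty_regs;		/* userspace -> kernel: these fields were modified */
	union {
		struct kvm_sync_regs_sketch regs;
		char padding[1024];
	} s;
};
```

After editing s.regs, userspace sets the corresponding bit in kvm_dirty_regs so the kernel reloads only what actually changed, avoiding extra get/set-register ioctls on the exit path.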
-
Committed by Christian Borntraeger
There are several places in the kvm module which set the prefix register. Since we need to flush the cpu, let's combine this operation into a helper function. This helper will also explicitly mask out the unused bits. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
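A hypothetical sketch of such a helper; the bit mask of the prefix register's significant bits and the reload mechanism are assumptions for illustration.

```c
#include <linux/types.h>

struct prefix_vcpu_sketch {
	u32 prefix;
	int reload_pending;
};

static void kvm_s390_set_prefix_sketch(struct prefix_vcpu_sketch *vcpu, u32 prefix)
{
	vcpu->prefix = prefix & 0x7fffe000u;	/* mask out unused bits (mask is an assumption) */
	vcpu->reload_pending = 1;		/* stand-in for forcing the vcpu to pick up the new prefix */
}
```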
-
Committed by Carsten Otte
This patch fixes the return code of kvm_arch_vcpu_ioctl in case of an unknown ioctl number. Signed-off-by: Carsten Otte <cotte@de.ibm.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-