- 19 November 2014, 4 commits

By Alexey Ishchuk
Add the new __NR_s390_pci_mmio_write and __NR_s390_pci_mmio_read system calls to allow user space applications to access device PCI I/O memory pages on the s390x platform. [ Martin Schwidefsky: some code beautification ] Signed-off-by: Alexey Ishchuk <aishchuk@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

By Martin Schwidefsky
The floating point registers of a process that uses vector instructions are no longer stored in task->thread.fp_regs but in the upper halves of the first 16 vector registers. The ptrace interface for peeks and pokes to the user area fails to take this into account. Fix __peek_user[_compat] and __poke_user[_compat] to use the vector array for the floating point registers if the process has one. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

By Sebastian Ott
Irq 0 is currently unused on s390. Since there is no reason for this, start counting at 0 and gain an additional irq. Also correctly report the smallest usable irq number for dynamic allocation. Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

By Frank Blaschka
Add ioport_map stubs to make vfio build on s390. Signed-off-by: Frank Blaschka <frank.blaschka@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

- 03 November 2014, 6 commits

By Hendrik Brueckner
The git commit c719f560 "perf: Fix and clean up initialization of pmu::event_idx" removed the PMU event index callback for all architectures but x86; remove the initialization of the event index as well. Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

By Martin Schwidefsky
Fix the following warnings from the sparse code checker:
arch/s390/kernel/signal.c:374:38: warning: cast removes address space of expression
arch/s390/kernel/signal.c:374:65: warning: incorrect type in initializer (different address spaces)
arch/s390/kernel/signal.c:374:65:    expected unsigned short [noderef] [usertype] <asn:1>*svc
arch/s390/kernel/signal.c:374:65:    got void *
arch/s390/kernel/compat_signal.c:437:38: warning: cast removes address space of expression
arch/s390/kernel/compat_signal.c:437:65: warning: incorrect type in initializer (different address spaces)
arch/s390/kernel/compat_signal.c:437:65:    expected unsigned short [noderef] [usertype] <asn:1>*svc
arch/s390/kernel/compat_signal.c:437:65:    got void *
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

By Martin Schwidefsky
The page table lock is acquired with a call to get_locked_pte; replace the plain spin_unlock with the correct unlock function pte_unmap_unlock. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
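For illustration, a minimal sketch of the locking pattern involved (generic kernel API, not the actual s390 code that was fixed): a pte returned by get_locked_pte() has to be released with pte_unmap_unlock(), which both unmaps the pte and drops the page table lock.

```c
#include <linux/mm.h>

/* Sketch only: operate on one pte under the page table lock. */
static int touch_one_pte(struct mm_struct *mm, unsigned long addr)
{
	spinlock_t *ptl;
	pte_t *ptep = get_locked_pte(mm, addr, &ptl);

	if (!ptep)
		return -ENOMEM;
	/* ... inspect or modify *ptep while the lock is held ... */
	pte_unmap_unlock(ptep, ptl);	/* unmaps the pte and releases ptl */
	return 0;
}
```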

By Martin Schwidefsky
Fix the following warnings from the sparse code checker:
arch/s390/include/asm/pci_io.h:165:49: warning: cast removes address space of expression
arch/s390/pci/pci.c:476:44: warning: cast removes address space of expression
arch/s390/pci/pci.c:491:36: warning: incorrect type in argument 2 (different address spaces)
arch/s390/pci/pci.c:491:36:    expected void [noderef] <asn:2>*addr
arch/s390/pci/pci.c:491:36:    got void *<noident>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

By Sebastian Ott
s390's arch_setup_msi_irqs function ensures that we don't return with more irqs than the PCI architecture allows and that a single PCI function doesn't consume more irqs than the kernel is configured for. At least the last check doesn't help much and should take the sum of all irqs into account. Since that's already done by irq_alloc_desc, we can remove this check. As for the first check, we should use the value provided by the firmware, which can be less than what the PCI architecture allows. Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

By Martin Schwidefsky
The kernel build for s390 fails for gcc compilers with version 3.x; set the minimum required version of gcc to version 4.3. As the atomic builtins are available with all gcc 4.x compilers, use the __sync_val_compare_and_swap and __sync_bool_compare_and_swap functions to replace the complex macro and inline assembler magic in include/asm/cmpxchg.h. The compiler can just do it and generates better code with the builtins. While we are at it, use __sync_bool_compare_and_swap for the _raw_compare_and_swap function in the spinlock code as well. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
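As an illustration, a minimal sketch of how such primitives map onto the gcc builtins (the function names here are made up; this is not the content of the s390 cmpxchg.h):

```c
/* Sketch: the builtins generate the compare-and-swap instructions directly. */
static inline unsigned long cmpxchg_sketch(unsigned long *ptr,
					   unsigned long old, unsigned long new)
{
	/* returns the previous value of *ptr, like cmpxchg() */
	return __sync_val_compare_and_swap(ptr, old, new);
}

static inline int raw_compare_and_swap_sketch(unsigned int *lock,
					      unsigned int old, unsigned int new)
{
	/* returns non-zero if the swap happened, as the spinlock code expects */
	return __sync_bool_compare_and_swap(lock, old, new);
}
```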

- 27 October 2014, 10 commits

By Martin Schwidefsky
Analogous to ptep_get_and_clear_full, define a variant of the pmdp_get_and_clear primitive which gets the full hint from the mmu_gather struct. This allows s390 to avoid a costly instruction when destroying an address space. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
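Roughly, the interface has the same shape as its pte counterpart; the sketch below shows a generic-style fallback (not the s390 implementation, which uses the full hint to pick a cheaper instruction):

```c
/* Sketch of the interface shape: "full" is non-zero when the whole
 * address space is being torn down, so an architecture may skip the
 * more expensive TLB-flushing variant in that case. */
static inline pmd_t pmdp_get_and_clear_full_sketch(struct mm_struct *mm,
						   unsigned long addr,
						   pmd_t *pmdp, int full)
{
	return pmdp_get_and_clear(mm, addr, pmdp);
}
```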

By Dominik Dingel
If fixup_user_fault does not fail, we have a writable pte. That pte might change, but it should not vanish. Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

By Heiko Carstens
Use NOKPROBE_SYMBOL() instead of the __kprobes annotation. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

By Heiko Carstens
If the function tracer is enabled, allow setting kprobes on the first instruction of a function (which is the function trace caller): If no kprobe is set, enabling and disabling function tracing of a function simply patches the first instruction. Either it is a nop (right now it's an unconditional branch, which skips the mcount block), or it's a branch to the ftrace_caller() function. If a kprobe is being placed on a function tracer calling instruction, we encode whether we actually have a nop or a branch in the remaining bytes after the breakpoint instruction (illegal opcode). This is possible, since the size of the instruction used for the nop and branch is six bytes, while the size of the breakpoint is only two bytes. Therefore the first two bytes contain the illegal opcode and the last four bytes contain either "0" for nop or "1" for branch. The kprobes code will then execute/simulate the correct instruction. Instruction patching for kprobes and the function tracer is always done with stop_machine(). Therefore we don't have any races where an instruction is patched concurrently on a different cpu. Besides that, the program check handler which executes the function trace caller instruction won't be executed concurrently with any stop_machine() execution either. This allows keeping the full fault-based kprobes handling, which generates correct pt_regs contents automatically. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
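Purely as an illustration of the encoding described above (the layout and field names are invented, not the kernel's actual data structures): the 6-byte slot is split into a 2-byte breakpoint and a 4-byte marker telling the kprobes code which instruction to simulate.

```c
/* Illustrative only: how the 6 bytes could be interpreted once a kprobe
 * sits on the function trace caller instruction. */
struct probed_ftrace_slot {
	unsigned short breakpoint;	/* 2-byte illegal opcode */
	unsigned int   was_branch;	/* 0: nop, 1: branch to ftrace_caller() */
} __attribute__((packed));
```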

By Dominik Dingel
When storage keys are enabled, unmerge already merged pages and prevent new pages from being merged. Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com> Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

By Dominik Dingel
As soon as storage keys are enabled we need to stop working on zero page mappings to prevent inconsistencies between storage keys and pgste. Otherwise the following data corruption could happen:
1) guest enables storage keys
2) guest sets the storage key for a not mapped page X -> the change goes to the PGSTE
3) guest reads from page X -> as X was not dirty before, the page will be zero page backed; the storage key from the PGSTE for X goes to the storage key for the zero page
4) guest sets the storage key for a not mapped page Y (same logic as above)
5) guest reads from page Y -> as Y was not dirty before, the page will be zero page backed; the storage key from the PGSTE for Y goes to the storage key for the zero page, overwriting the storage key for X
While holding the mmap_sem, we are safe against changes on entries we already fixed, as every fault would need to take the mmap_sem (read). Other vCPUs executing storage key instructions will get a one time interception and are serialized with the mmap_sem as well. Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
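The shape of the resulting check is roughly the following sketch (the helper name is mine and the mm_context field is an assumption about the s390 code of that time, not a quote from the patch):

```c
/* Sketch: read faults may only be backed by the shared zero page as long
 * as the mm has not enabled storage keys. */
static bool may_use_zero_page(struct mm_struct *mm)
{
	return !mm->context.use_skey;	/* assumed s390 mm_context_t field */
}
```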

By Dominik Dingel
Replace the s390 specific page table walker for the pgste updates with a call to the common code walk_page_range function. There are now two pte modification functions, one for the reset of the CMMA state and another one for the initialization of the storage keys. Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
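For context, a sketch of the walk_page_range() usage pattern of kernels from that era (the callback name and the walked range are made up; this is not the actual pgste code):

```c
#include <linux/mm.h>

/* Sketch: visit every pte of an address range with the common-code walker. */
static int fix_pte_cb(pte_t *ptep, unsigned long addr,
		      unsigned long next, struct mm_walk *walk)
{
	/* modify the pte/pgste for this address here */
	return 0;
}

static void fix_all_ptes(struct mm_struct *mm)
{
	struct mm_walk walk = {
		.pte_entry = fix_pte_cb,
		.mm        = mm,
	};

	down_read(&mm->mmap_sem);
	walk_page_range(0, TASK_SIZE, &walk);
	up_read(&mm->mmap_sem);
}
```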

By Martin Schwidefsky
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

By Heiko Carstens
The kernel-provided vdso functions do not get a stack frame from the calling function and therefore may not change the stack contents, unless they allocate space on their own. This problem was exposed with 070b7be6 "s390/vdso: replace stck with stcke", which writes 16 bytes instead of 8 bytes into the stack frame. These additional 8 bytes however were indeed used by the caller (glibc) to save data, and therefore this data was corrupted by the vdso code. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

By Martin Schwidefsky
The last high-frequency call site of the STCK instruction is do_account_vtime. Replace it with the faster STCKF instruction. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

- 17 October 2014, 2 commits

By Jan Willeke
If kprobes is disabled, uprobes will not compile. Fix this by including the correct header files. Signed-off-by: Jan Willeke <willeke@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

By Heiko Carstens
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

- 15 October 2014, 1 commit

By Dominik Dingel
pte_unmap works on page table entry pointers; dereferencing should be avoided. As pte_unmap is a NOP on s390, this is more of a cleanup in case such a function is supplied later. Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com> Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

- 10 October 2014, 1 commit

By Geert Uytterhoeven
The different architectures used their own (and different) declarations:
extern __visible const void __nosave_begin, __nosave_end;
extern const void __nosave_begin, __nosave_end;
extern long __nosave_begin, __nosave_end;
Consolidate them using the first variant in <asm/sections.h>. Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Guan Xuetao <gxt@mprc.pku.edu.cn> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 09 October 2014, 8 commits

By Heiko Carstens
We can simply patch the mask field within the branch relative on condition instruction at the beginning of the ftrace_graph_caller code block. This makes the logic even simpler and we get rid of the displacement calculation. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
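As an illustration of the trick (the constants and helper are mine, not the kernel's ftrace code): for the brcl instruction the condition mask lives in the high nibble of the second instruction byte, so switching between "never branch" (mask 0) and "always branch" (mask 15) needs only a one-byte patch and no displacement calculation.

```c
/* Illustrative sketch: patch only the condition-mask nibble of a
 * "branch relative on condition long" (brcl) instruction. */
static void set_graph_caller_sketch(unsigned char *brcl, int enable)
{
	brcl[1] = (brcl[1] & 0x0f) | (enable ? 0xf0 : 0x00);
}
```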

By Heiko Carstens
31 bit and 64 bit diverge more and more, and it is rather painful to keep both parts running. To make things simpler, just remove the 31 bit support which nobody uses anyway. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

By Michael Holzheu
With this patch for kdump the s390 vector registers are stored into the prepared save areas in the old kernel and into the REGSET_VX_LOW and REGSET_VX_HIGH ELF notes for /proc/vmcore in the new kernel. The NT_S390_VXRS_LOW note contains the lower halves of the first 16 vector registers 0-15. The higher halves are stored in the floating point register ELF note. The NT_S390_VXRS_HIGH note contains the full vector registers 16-31. The kernel provides a save area for storing vector registers in case of machine checks. A pointer to this save area is stored in the CPU lowcore at offset 0x11b0. This save area is also used to save the registers for kdump. In case of a dumped crashed kdump kernel those areas are used to extract the registers of the production system. The vector registers for remote CPUs are stored using the "store additional status at address" SIGP. For the dump CPU the vector registers are stored with the VSTM instruction. With this patch zfcpdump also stores the vector registers. Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

By Martin Schwidefsky
Add the instructions introduced with the vector extension to the in-kernel disassembler. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

By Martin Schwidefsky
The vector extension introduces 32 128-bit vector registers and a set of instructions to operate on the vector registers. The kernel can control the use of vector registers for the problem state program with a bit in control register 0. Once enabled for a process, the kernel needs to retain the content of the vector registers on context switch. The signal frame is extended to include the vector registers. Two new register sets NT_S390_VXRS_LOW and NT_S390_VXRS_HIGH are added to the regset interface for the debugger and core dumps. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

By Martin Schwidefsky
Move the C functions and definitions related to the idle state handling to arch/s390/include/asm/idle.h and arch/s390/kernel/idle.c. The function s390_get_idle_time is renamed to arch_cpu_idle_time and vtime_stop_cpu to enabled_wait. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

By Martin Schwidefsky
Move the nohz_delay bit from the s390_idle data structure to the per-cpu flags. Clear the nohz delay flag in __cpu_disable and remove the cpu hotplug notifier that used to do this. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

By Martin Schwidefsky
The sysfs attributes /sys/devices/system/cpu/cpu0/idle_count and /sys/devices/system/cpu/cpu0/idle_time_us are reset to zero every time a CPU is set online. The idle and iowait fields in /proc/stat corresponding to idle_time_us are not reset. To make things consistent, do not reset the data for the sysfs attributes. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

- 03 October 2014, 1 commit

By Rik van Riel
On 32 bit systems cmpxchg cannot handle 64 bit values, so some additional magic is required to allow a 32 bit system with CONFIG_VIRT_CPU_ACCOUNTING_GEN=y enabled to build. Make sure the correct cmpxchg function is used when doing an atomic swap of a cputime_t. Reported-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Rik van Riel <riel@redhat.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: umgwanakikbuti@gmail.com Cc: fweisbec@gmail.com Cc: srao@redhat.com Cc: lwoodman@redhat.com Cc: atheurer@redhat.com Cc: oleg@redhat.com Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Mackerras <paulus@samba.org> Cc: linux390@de.ibm.com Cc: linux-arch@vger.kernel.org Cc: linuxppc-dev@lists.ozlabs.org Cc: linux-s390@vger.kernel.org Link: http://lkml.kernel.org/r/20140930155947.070cdb1f@annuminas.surriel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
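A hedged sketch of the selection logic (the helper name is mine, not necessarily what the patch introduced): when cputime_t is 64 bits wide on a 32 bit kernel, the swap has to go through cmpxchg64() instead of the word-sized cmpxchg().

```c
/* Sketch: pick a cmpxchg variant that matches the width of cputime_t. */
#if defined(CONFIG_VIRT_CPU_ACCOUNTING_GEN) && BITS_PER_LONG == 32
# define cmpxchg_cputime_sketch(ptr, old, new) \
	((cputime_t)cmpxchg64((u64 *)(ptr), (u64)(old), (u64)(new)))
#else
# define cmpxchg_cputime_sketch(ptr, old, new) \
	cmpxchg((ptr), (old), (new))
#endif
```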

- 01 October 2014, 2 commits

By David Hildenbrand
This patch introduces the halt_wakeup counter used by common code and uses it to count vcpu wakeups done in s390 arch specific code. Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com> Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>

By Christian Borntraeger
There is nothing to do for KVM to support TOD-CLOCK steering. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>

- 30 September 2014, 1 commit

By Heiko Carstens
Invalidate several pte entries at once if the ipte range facility is available. Currently this works only for DEBUG_PAGE_ALLOC, where up to 2 ^ MAX_ORDER pages may be invalidated at once. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

- 26 September 2014, 2 commits

By Martin Schwidefsky
Fix the calculation that decides whether a 4-level kernel page table is required. Git commit c972cc60 "s390/vmalloc: have separate modules area" added the separate module area, which reduces the size of the vmalloc area but fails to take it into account in the 3 vs 4 level page table decision. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

By Martin Schwidefsky
The call to topology_init is too late for the set_sched_topology call. The initial scheduling domain structure has already been established with the default topology array. Use the smp_cpus_done() call to get the s390 specific topology array registered early enough. Cc: stable@vger.kernel.org # v3.16+ Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

- 25 September 2014, 2 commits

By Jan Willeke
Signed-off-by: Jan Willeke <willeke@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

By Jan Willeke
This patch moves common functions from kprobes.c to probes.c. This makes it possible for uprobes to use them without enabling kprobes. Signed-off-by: Jan Willeke <willeke@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>