- 12 February 2015, 2 commits
-
-
By Heiko Carstens

There is no reason to initialize the topology cpu masks already while setup_arch() is being called. It is sufficient to initialize the masks before the scheduler becomes SMP aware; therefore a pre-SMP initcall, aka early_initcall, is sufficient. This also allows converting the cpu_topology array into a per-cpu variable with a later patch. Without this patch that wouldn't be possible, since the per-cpu memory areas are not yet allocated while setup_arch() is executed.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Kirill A. Shutemov

LKP has triggered a compiler warning after my recent patch "mm: account pmd page tables to the process":

    mm/mmap.c: In function 'exit_mmap':
    mm/mmap.c:2857:2: warning: right shift count >= width of type [enabled by default]

The code:

    2857    WARN_ON(mm_nr_pmds(mm) >
    2858            round_up(FIRST_USER_ADDRESS, PUD_SIZE) >> PUD_SHIFT);

On tile, FIRST_USER_ADDRESS is defined as 0, so round_up() produces a plain int, and shifting an int right by PUD_SHIFT (at least the width of int there) triggers the warning. I think the best way to fix it is to define FIRST_USER_ADDRESS as unsigned long, on every arch for consistency.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reported-by: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
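The fix is a one-line change per architecture; a sketch of its shape (not any specific arch's actual line):

```c
/* before: the literal 0 promotes to int, so the PUD_SHIFT right shift can
 * exceed the width of the type */
#define FIRST_USER_ADDRESS	0

/* after: force unsigned long so round_up() and the shift use 64-bit arithmetic */
#define FIRST_USER_ADDRESS	0UL
```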
-
- 11 February 2015, 1 commit
-
-
By Kirill A. Shutemov

We've replaced the remap_file_pages(2) implementation with emulation. Nobody creates non-linear mappings anymore.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 06 February 2015, 1 commit
-
-
By Joonsoo Kim

Kim Phillips reported the following build failure:

      LD      init/built-in.o
    mm/built-in.o: In function `free_pages_prepare':
    mm/page_alloc.c:770: undefined reference to `.kernel_map_pages'
    mm/built-in.o: In function `prep_new_page':
    mm/page_alloc.c:933: undefined reference to `.kernel_map_pages'
    mm/built-in.o: In function `map_pages':
    mm/compaction.c:61: undefined reference to `.kernel_map_pages'
    make: *** [vmlinux] Error 1

The reason for this problem is that commit 031bc574 ("mm/debug-pagealloc: make debug-pagealloc boottime configurable") forgot to remove the old declaration of kernel_map_pages() for some architectures. This patch removes them to fix the build failure.

Reported-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: David Miller <davem@davemloft.net>
Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 29 January 2015, 3 commits
-
-
By Heiko Carstens

Use a "brcl 0,2" instruction for jump label nops at compile time, so we don't mix up the different nops during mcount/hotpatch call site detection. The initial jump label code instruction replacement will exchange these instructions with either a branch or a "brcl 0,0" instruction.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens

Make use of gcc's hotpatch support to generate better code for ftrace function tracing. The generated code now contains only a six-byte nop in each function prologue, instead of a 24-byte code block that would be runtime patched to support function tracing. With the new code generation the runtime overhead of supporting function tracing is close to zero, while the original code showed a significant performance impact.

Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens

Christian Borntraeger reported that the now missing diag 44 calls (voluntary time slice end) cause a performance regression for stop_machine() calls if a machine has more virtual cpus than the host has physical cpus. This patch mainly reverts 57f2ffe1 ("s390: remove diag 44 calls from cpu_relax()"), with the exception that we still do not issue diag 44 calls when running with SMT enabled: due to the group scheduling algorithms, doing so when running in LPAR would lead to significant latencies. However, when running in LPAR we do not have more virtual than physical cpus.

Reported-and-tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
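A minimal sketch of the resulting cpu_relax(), assuming the existing MACHINE_HAS_DIAG44 and smp_cpu_mtid helpers from the surrounding s390 code (taken on trust here, not verified):

```c
void cpu_relax(void)
{
	/* Voluntarily end the time slice, but only without SMT and only
	 * if the machine supports diagnose 0x44. */
	if (!smp_cpu_mtid && MACHINE_HAS_DIAG44)
		asm volatile("diag 0,0,0x44");
	barrier();
}
```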
-
- 23 January 2015, 1 commit
-
-
By Martin Schwidefsky

Add the compare-and-delay instruction to the spin-lock and rw-lock retry loops. A CPU executing the compare-and-delay instruction stops until the lock value has changed. This is done to make the locking code for contended locks behave better with regard to the multi-threading facility: a thread of a core executing a compare-and-delay will allow the other threads of the core to get a larger share of the core's resources.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 22 January 2015, 2 commits
-
-
By Martin Schwidefsky

The multi-threading facility is introduced with the z13 processor family. This patch adds code to detect the multi-threading facility. With the facility enabled, each core will surface multiple hardware threads to the system. Each hardware thread looks like a normal CPU to the operating system, with all its registers and properties.

The SCLP interface reports the SMT topology indirectly via the maximum thread id: each CPU reported in the result of a read-scp-information is a core representing a number of hardware threads.

To reflect the reduced CPU capacity when two hardware threads run on a single core, the MT utilization counter set is used to normalize the raw cputime obtained from the CPU timer deltas. This scaled cputime is reported via the taskstats interface. The normal /proc/stat numbers are based on the raw cputime and are not affected by the normalization.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
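A rough sketch of the normalization described in the last paragraph; mt_scaling_mult and mt_scaling_div are hypothetical names for factors derived from the MT utilization counter set:

```c
#include <linux/math64.h>

/* Scale a raw CPU-timer delta by the measured multithreading utilization.
 * The real factors come from the MT counter set; these names are illustrative. */
static u64 scale_cputime(u64 raw, u64 mt_scaling_mult, u64 mt_scaling_div)
{
	return div64_u64(raw * mt_scaling_mult, mt_scaling_div);
}
```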
-
By Martin Schwidefsky

Avoid cache aliasing on z13 by aligning shared objects to multiples of 512K. The virtual addresses of a page from a shared file need to have identical bits in the range 2^12 to 2^18.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
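The idea, sketched below in generic terms (helper name and structure are illustrative, not the actual s390 mmap code): pick a 512K-aligned base, then offset it so the low-order address bits match the file offset.

```c
#define SHARED_ALIGN_MASK	((512UL << 10) - 1)	/* bits 2^12 .. 2^18 must match */

/* Illustrative: align a candidate mmap address so that pages of a shared file
 * always get virtual addresses whose low 19 bits match the file offset. */
static unsigned long shared_align(unsigned long addr, unsigned long pgoff)
{
	unsigned long base = (addr + SHARED_ALIGN_MASK) & ~SHARED_ALIGN_MASK;

	return base | ((pgoff << PAGE_SHIFT) & SHARED_ALIGN_MASK);
}
```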
-
- 08 January 2015, 1 commit
-
-
By Chen Gang

_sclp_print_early() has a return value: at present it returns 0 for OK and 1 for failure. The value is returned in '%r2', so use 'long' as the return type (the caller can check '%r2' directly). The related warning:

      CC      arch/s390/boot/compressed/misc.o
    arch/s390/boot/compressed/misc.c:66:8: warning: type defaults to 'int' in declaration of '_sclp_print_early' [-Wimplicit-int]
     extern _sclp_print_early(const char *);
            ^

At present _sclp_print_early() is only used by puts(), so its declaration could remain in the 'misc.c' file.

[heiko.carstens@de.ibm.com]: move the declaration to the sclp.h header file instead.

Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
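The resulting declaration, as placed in the sclp header, is presumably just (sketch):

```c
/* returns 0 on success, 1 on failure (the value left in %r2) */
long _sclp_print_early(const char *);
```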
-
- 07 January 2015, 1 commit
-
-
By Chen Gang

In C an array parameter is treated as a pointer, so sizeof on an array parameter is equal to sizeof on a pointer, which causes a compiler warning (with allmodconfig, gcc 5):

    ./arch/s390/include/asm/timex.h: In function 'get_tod_clock_ext':
    ./arch/s390/include/asm/timex.h:76:32: warning: 'sizeof' on array function parameter 'clk' will return size of 'char *' [-Wsizeof-array-argument]
      typedef struct { char _[sizeof(clk)]; } addrtype;
                                    ^

Using a macro, CLOCK_STORE_SIZE, instead of all the related hard-coded numbers avoids this warning. Also add a tab to the CLOCK_TICK_RATE definition to match the coding style.

[heiko.carstens@de.ibm.com]: Chen's patch actually fixes a bug within the get_tod_clock_ext() inline assembly, where we incorrectly tell the compiler that only 8 bytes of memory get changed instead of 16 bytes. This would allow gcc to generate incorrect code; right now this doesn't seem to be the case. Also slightly changed the patch:
- renamed CLOCK_STORE_SIZE to STORE_CLOCK_EXT_SIZE
- changed get_tod_clock_ext() to receive a char pointer parameter

Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
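Putting the pieces of the note above together, the fixed helper presumably looks roughly like this (a sketch following the description, not a verbatim copy of the header):

```c
#define STORE_CLOCK_EXT_SIZE	16	/* stcke stores a 16-byte extended TOD value */

static inline void get_tod_clock_ext(char *clk)
{
	/* Tell gcc that the full 16 bytes behind clk are written, not just 8. */
	typedef struct { char _[STORE_CLOCK_EXT_SIZE]; } addrtype;

	asm volatile("stcke %0" : "=Q" (*(addrtype *)clk) : : "cc");
}
```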
-
- 18 December 2014, 1 commit
-
-
By Christian Borntraeger

On some models, "stnsm 255" might be slightly faster than "stosm 0".

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 12 December 2014, 2 commits
-
-
By Alexander Duyck

There are a number of situations where the mandatory barriers rmb() and wmb() are used to order memory/memory operations in device drivers, and those barriers are much heavier than they actually need to be. For example, on PowerPC wmb() uses the heavy-weight sync instruction, when for coherent memory operations all that is really needed is an lwsync or eieio instruction.

This commit adds coherent-only versions of the mandatory memory barriers rmb() and wmb(). In most cases this results in the barrier being the same as the SMP barriers for the SMP case; however, in some cases we use a barrier that is somewhere in between rmb() and smp_rmb(). For example, on ARM the rmb barriers break down as follows:

    Barrier     Call       Explanation
    ---------   --------   ----------------------------------
    rmb()       dsb()      Data synchronization barrier - system
    dma_rmb()   dmb(osh)   data memory barrier - outer sharable
    smp_rmb()   dmb(ish)   data memory barrier - inner sharable

These new barriers are not as safe as the standard rmb() and wmb(): specifically, they do not guarantee ordering between coherent and incoherent memories. The primary use case is to enforce ordering of reads and writes when accessing coherent memory that is shared between the CPU and a device.

It may also be noted that there is no dma_mb(). Most architectures don't provide a good mechanism for performing a coherent-only full barrier without resorting to the same mechanism used in mb(), so there isn't much to be gained in trying to define such a function.

Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Cc: Michael Ellerman <michael@ellerman.id.au>
Cc: Michael Neuling <mikey@neuling.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: David Miller <davem@davemloft.net>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
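A typical driver-side usage pattern for the new barriers might look like this (descriptor layout, flag names, and helpers are made up for illustration):

```c
/* Consume a receive descriptor that the device has handed back to the CPU. */
if (desc->status & DESC_DONE) {
	dma_rmb();			/* read status before the rest of the descriptor */
	process_buffer(desc->buf, desc->len);
}

/* Publish a transmit descriptor to the device. */
desc->addr = dma_addr;
desc->len  = len;
dma_wmb();				/* make the payload visible before flipping ownership */
desc->status = DESC_OWNED_BY_DEVICE;
```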
-
By Alexander Duyck

This patch is meant to clean up the handling of read_barrier_depends and smp_read_barrier_depends. In multiple spots in the kernel headers read_barrier_depends is defined as "do {} while (0)", yet we then go into the SMP vs non-SMP sections and have the SMP version reference read_barrier_depends while the non-SMP version defines it as yet another empty do/while.

With this commit I went through and cleaned out the duplicate definitions and reduced the number of definitions down to 2 per header. In addition I moved the 50-line comment for the macro from the x86 and mips headers, which defined it as an empty do/while, to those that actually define the macro: alpha and blackfin.

Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 11 December 2014, 1 commit
-
-
By Daniel Borkmann

As there are now no remaining users of arch_fast_hash(), let's kill it entirely. This basically reverts commit 71ae8aac ("lib: introduce arch optimized hash library") and follow-up work, that is, e.g., commit 23721754 ("lib: hash: follow-up fixups for arch hash"), commit e3fec2f7 ("lib: Add missing arch generic-y entries for asm-generic/hash.h") and, last but not least, commit 6a02652d ("perf tools: Fix include for non x86 architectures").

Cc: Francesco Fusco <fusco@ntop.org>
Cc: Thomas Graf <tgraf@suug.ch>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 08 December 2014, 5 commits
-
-
By Martin Schwidefsky

git commit 8461b63c "s390: translate cputime magic constants to macros" introduced a build error for 31-bit:

    kernel/built-in.o: In function `posix_cpu_timer_set':
    posix-cpu-timers.c:(.text+0x2a8cc): undefined reference to `__udivdi3'

The original code is actually broken for 31-bit and has been corrected by the above commit, which forces the compiler to use 64-bit arithmetic through the CPUTIME_PER_USEC define. To fix the compile error, replace the 64-bit division with a call to __div().

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
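In other words, the change is presumably of this shape (the wrapper below is illustrative; the actual call site lives in the s390 cputime code):

```c
/* A plain "cputime / CPUTIME_PER_USEC" is a 64-bit division, which on 31-bit
 * s390 gets emulated via libgcc's __udivdi3 and fails to link in the kernel.
 * __div() is the arch helper that performs the division without libgcc. */
static inline u64 cputime_to_usecs_31bit_safe(u64 cputime)
{
	return __div(cputime, CPUTIME_PER_USEC);
}
```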
-
By Martin Schwidefsky

The pmd_free_tlb() function fails to call pgtable_pmd_page_dtor(). Without that call the ptlock for the pmd tables will not be freed. Add the missing call.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Frederic Weisbecker

Make the code more self-explanatory by naming the magic constants.

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
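For instance, the s390 CPU timer ticks 4096 times per microsecond, so macros of roughly this shape replace the bare numbers (a sketch; the exact set of names may differ):

```c
#define CPUTIME_PER_USEC	4096ULL
#define CPUTIME_PER_SEC		(CPUTIME_PER_USEC * USEC_PER_SEC)
```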
-
By Frederic Weisbecker

s390 uses an open-coded seqcount to synchronize idle time accounting. Let's consolidate it with the standard API.

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
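The standard pattern being switched to looks roughly like this (the structure and field names are illustrative, not the actual s390 idle data):

```c
#include <linux/seqlock.h>

struct idle_times {
	seqcount_t seqcount;	/* initialize with seqcount_init() */
	u64 enter, exit;
};

/* writer: idle entry/exit path */
static void idle_update(struct idle_times *it, u64 enter, u64 exit)
{
	write_seqcount_begin(&it->seqcount);
	it->enter = enter;
	it->exit = exit;
	write_seqcount_end(&it->seqcount);
}

/* reader: take a consistent snapshot without blocking the idle path */
static u64 idle_delta(struct idle_times *it)
{
	unsigned int seq;
	u64 delta;

	do {
		seq = read_seqcount_begin(&it->seqcount);
		delta = it->exit - it->enter;
	} while (read_seqcount_retry(&it->seqcount, seq));
	return delta;
}
```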
-
By Christian Borntraeger

debug_sprintf_event/exception are called even for debug events whose debug level is disabled. All other functions already do the level check in a wrapper function; let's do the same here. Due to the var args the compiler refuses to inline this function, so wrap it via a macro instead.

This patch saves around 80 ns on my z196 for a KVM round trip (we have two debug statements, for entry and exit) when KVM is built as a module. The savings for built-in drivers are smaller, since in that case the avoided function call has no PLT overhead.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
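The wrapper presumably takes roughly this shape (a sketch; __debug_sprintf_event stands in for the existing out-of-line implementation, and the exact level check may differ):

```c
#define debug_sprintf_event(id, level, fmt, ...)			\
({									\
	debug_entry_t *__ret = NULL;					\
									\
	/* skip the out-of-line call entirely when the event is filtered */ \
	if ((id) && (level) <= (id)->level)				\
		__ret = __debug_sprintf_event((id), (level),		\
					      fmt, ## __VA_ARGS__);	\
	__ret;								\
})
```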
-
- 28 November 2014, 6 commits
-
-
By Jens Freimann

This patch adapts handling of local interrupts to be more compliant with the z/Architecture Principles of Operation and introduces a data structure which allows more efficient handling of interrupts:

- get rid of the li->active flag, use a bitmap instead
- keep interrupts in a bitmap instead of a list
- deliver interrupts in the order of their priority as defined in the PoP
- use a second bitmap for sigp emergency requests, as a CPU can have one request pending from every other CPU in the system

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
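A sketch of why a priority-ordered bitmap makes the delivery loop trivial; the enum values and the deliver_one() helper below are illustrative, assuming bit 0 is the highest priority:

```c
enum irq_types {		/* sorted highest to lowest priority (illustrative) */
	IRQ_PEND_MCHK = 0,
	IRQ_PEND_EXT_EMERGENCY,
	IRQ_PEND_IO,
	IRQ_PEND_COUNT
};

static void deliver_pending(unsigned long *pending)
{
	unsigned int irq;

	/* for_each_set_bit scans from bit 0 upward, i.e. highest priority first */
	for_each_set_bit(irq, pending, IRQ_PEND_COUNT) {
		deliver_one(irq);	/* hypothetical per-type delivery helper */
		clear_bit(irq, pending);
	}
}
```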
-
By Jens Freimann

Adds a bitmap to the vcpu structure which is used to keep track of local pending interrupts. Also adds an enum with all interrupt types, sorted in order of priority (highest to lowest).

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
By Jason J. Herne

Define get_guest_storage_key, which can be used to get the value of a guest storage key. This complements the functionality provided by the helper function set_guest_storage_key. Both functions are needed for live migration of s390 guests that use storage keys.

Signed-off-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
By Thomas Huth

A couple of our interception handlers rewind the PSW to the beginning of the instruction, to run the intercepted instruction again during the next SIE entry. This normally works fine, but there is also the possibility that the instruction did not get run directly but via an EXECUTE instruction. In this case the PSW does not point to the instruction that caused the interception, but to the EXECUTE instruction! So we've got to rewind the PSW to the beginning of the EXECUTE instruction instead. This is now accomplished with a new helper function, kvm_s390_rewind_psw().

Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
By Heiko Carstens

Simplify cpu_relax() to a simple barrier(). Performance-wise this doesn't seem to make any big difference anymore, since nearly all lock variants have directed-yield semantics in the meantime. This also makes s390 behave like all other architectures.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens

The common code ftrace_return_address(n), which is just a wrapper for __builtin_return_address(n), will only work for n > 0 if CONFIG_FRAME_POINTER is set to 'y'; otherwise it will return 0. Since on s390 we will never have that config option set to 'y', ftrace_return_address() won't work at all for n > 0. Luckily we always compile the kernel with -mkernel-backchain, which in turn means that __builtin_return_address(n) will always work. So let ftrace_return_address(n) map to __builtin_return_address(n).

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
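The described mapping is a one-liner in the arch ftrace header (sketch):

```c
#define ftrace_return_address(n)	__builtin_return_address(n)
```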
-
- 19 November 2014, 3 commits
-
-
By Dave Hansen

The x86 MPX patch set calls arch_unmap() and arch_bprm_mm_init() from fs/exec.c, so we need at least a stub for them in all architectures. They are only called under an #ifdef for CONFIG_MMU=y, so we can at least restrict this to architectures with MMU support.

blackfin/c6x have no MMU support, so they do not call arch_unmap(); they also do not include mm_hooks.h or mmu_context.h at all and do not need to be touched.

s390, um and unicore32 do not use asm-generic/mm_hooks.h, so they got their own arch_unmap() versions. (I also moved um's arch_dup_mmap() to be closer to the other mm_hooks.h functions.)

xtensa only includes mm_hooks when MMU=y, which should be fine since arch_unmap() is called only from MMU=y code.

For the rest, we use the stub copies of these functions in asm-generic/mm_hooks.h.

I cross-compiled defconfigs for cris (to check NOMMU) and s390 to make sure that this works. I also checked a 64-bit build of UML and all my normal x86 builds, including PARAVIRT on and off.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Dave Hansen <dave@sr71.net>
Cc: linux-arch@vger.kernel.org
Cc: x86@kernel.org
Link: http://lkml.kernel.org/r/20141118182350.8B4AA2C2@viggo.jf.intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
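The empty stubs presumably look like this (a sketch of the asm-generic versions; the exact signatures are inferred from the description of the MPX series):

```c
static inline void arch_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
			      unsigned long start, unsigned long end)
{
	/* no-op on architectures without MPX-style unmap bookkeeping */
}

static inline void arch_bprm_mm_init(struct mm_struct *mm,
				      struct vm_area_struct *vma)
{
	/* no-op by default; x86 MPX provides a real implementation */
}
```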
-
By Sebastian Ott

Irq 0 is currently unused on s390. Since there is no reason for this, start counting at the beginning and gain an additional irq. Also correctly report the smallest usable irq number for dynamic allocation.

Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Frank Blaschka

Add ioport_map stubs to make vfio build on s390.

Signed-off-by: Frank Blaschka <frank.blaschka@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 10 November 2014, 1 commit
-
-
By Thierry Reding

The xlate_dev_{kmem,mem}_ptr() functions take either a physical address or a kernel virtual address, so the data types should be phys_addr_t and void *. They both return a kernel virtual address which is only ever used in calls to copy_{from,to}_user(), so make the variables that store it void * rather than char * for consistency. Also, only define a weak unxlate_dev_mem_ptr() function if architectures haven't overridden it in the asm/io.h header file.

Signed-off-by: Thierry Reding <treding@nvidia.com>
-
- 03 November 2014, 3 commits
-
-
By Martin Schwidefsky

Fix the following warnings from the sparse code checker:

    arch/s390/include/asm/pci_io.h:165:49: warning: cast removes address space of expression
    arch/s390/pci/pci.c:476:44: warning: cast removes address space of expression
    arch/s390/pci/pci.c:491:36: warning: incorrect type in argument 2 (different address spaces)
    arch/s390/pci/pci.c:491:36:    expected void [noderef] <asn:2>*addr
    arch/s390/pci/pci.c:491:36:    got void *<noident>

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Sebastian Ott

s390's arch_setup_msi_irqs() function ensures that we don't return with more irqs than the PCI architecture allows and that a single PCI function doesn't consume more irqs than the kernel is configured for. The latter check doesn't help much and would have to take the sum of all irqs into account; since that's already done by irq_alloc_desc, we can remove this check. As for the first check, we should use the value provided by the firmware, which can be less than what the PCI architecture allows.

Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Martin Schwidefsky

The kernel build for s390 fails for gcc compilers with version 3.x; set the minimum required gcc version to 4.3. As the atomic builtins are available with all gcc 4.x compilers, use the __sync_val_compare_and_swap and __sync_bool_compare_and_swap functions to replace the complex macro and inline assembler magic in include/asm/cmpxchg.h. The compiler can just do it and generates better code with the builtins. While we are at it, use __sync_bool_compare_and_swap for the _raw_compare_and_swap function in the spinlock code as well.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
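As a sketch of the direction (not the exact header contents, which still handle multiple operand sizes), the builtins let the compare-and-swap helpers collapse to something like:

```c
/* full compare-and-swap, returning the previous value */
#define cmpxchg(ptr, old, new)						\
	__sync_val_compare_and_swap((ptr), (old), (new))

/* spinlock fast path: did we manage to install the new value? */
static inline int _raw_compare_and_swap(unsigned int *lock,
					unsigned int old, unsigned int new)
{
	return __sync_bool_compare_and_swap(lock, old, new);
}
```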
-
- 28 October 2014, 3 commits
-
-
By David Hildenbrand

This patch introduces instruction counters for all known sigp orders, and also a separate one for unknown orders that are passed to user space.

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
By David Hildenbrand

In preparation for further code changes, this patch introduces separate handler functions for:
- SIGP (RE)START, which will not be allowed to terminate pending orders
- SIGP (INITIAL) CPU RESET, which will be allowed to terminate certain pending orders
- unknown sigp orders

All sigp orders that require user space intervention are logged.

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
By Thomas Huth

The ipte locking should be done for each VM separately, not globally. This way we avoid possible congestion when the simple ipte lock is used and multiple VMs are running.

Suggested-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
- 27 October 2014, 3 commits
-
-
By Martin Schwidefsky

Analogous to ptep_get_and_clear_full(), define a variant of the pmdp_get_and_clear() primitive which gets the full hint from the mmu_gather struct. This allows s390 to avoid a costly instruction when destroying an address space.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
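A sketch of what such a variant looks like on the generic side; the body is illustrative (the default simply delegates), while the s390 version uses the full hint to skip the expensive part:

```c
static inline pmd_t pmdp_get_and_clear_full(struct mm_struct *mm,
					     unsigned long addr,
					     pmd_t *pmdp, int full)
{
	/* "full" means the whole address space is being torn down, so an
	 * architecture may use a cheaper clear; the generic fallback just
	 * reuses the ordinary primitive. */
	return pmdp_get_and_clear(mm, addr, pmdp);
}
```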
-
By Heiko Carstens

If the function tracer is enabled, allow setting kprobes on the first instruction of a function (which is the function trace caller).

If no kprobe is set, enabling and disabling function tracing of a function simply patches the first instruction: either it is a nop (right now an unconditional branch which skips the mcount block), or it is a branch to the ftrace_caller() function.

If a kprobe is being placed on a function-tracer calling instruction, we encode in the remaining bytes after the breakpoint instruction (illegal opcode) whether we actually have a nop or a branch. This is possible since the instruction used for the nop and the branch is six bytes long, while the breakpoint is only two bytes: the first two bytes contain the illegal opcode and the last four bytes contain either "0" for nop or "1" for branch. The kprobes code will then execute/simulate the correct instruction.

Instruction patching for kprobes and the function tracer is always done with stop_machine(), so we don't have any races where an instruction is patched concurrently on a different cpu. Besides that, the program check handler which executes the function trace caller instruction won't be executed concurrently with any stop_machine() execution either. This allows keeping the full fault-based kprobes handling, which generates correct pt_regs contents automatically.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Dominik Dingel

When storage keys are enabled, unmerge already merged pages and prevent new pages from being merged.

Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-