- 24 Mar 2019, 2 commits
Committed by Martin Schwidefsky

commit 86a86804e4f18fc3880541b3d5a07f4df0fe29cb upstream.

The fix to make WARN work in the early boot code created a problem on older machines without EDAT-1. The setup_lowcore_dat_on function uses the pointer from lowcore_ptr[0] to set the DAT bit in the new PSWs. That does not work if the kernel page table is set up with 4K pages, as the prefix address maps to absolute zero. To make this work, the PSWs need to be changed via address 0, in the form of the S390_lowcore definition.

Reported-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Cornelia Huck <cohuck@redhat.com>
Fixes: 94f85ed3e2f8 ("s390/setup: fix early warning messages")
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
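A minimal sketch of the fixed function, assuming the lowcore PSW field names of this era; this is illustrative, not the literal diff:

```c
/* Sketch: set the DAT bit in the new PSWs through the prefix area at
 * absolute address 0 (S390_lowcore), which is correct with and
 * without EDAT-1. Field names are assumptions. */
static void __init setup_lowcore_dat_on(void)
{
	S390_lowcore.external_new_psw.mask |= PSW_MASK_DAT;
	S390_lowcore.svc_new_psw.mask |= PSW_MASK_DAT;
	S390_lowcore.program_new_psw.mask |= PSW_MASK_DAT;
	S390_lowcore.io_new_psw.mask |= PSW_MASK_DAT;
}
```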
-
Committed by Martin Schwidefsky

commit 8727638426b0aea59d7f904ad8ddf483f9234f88 upstream.

The setup_lowcore() function creates a new prefix page for the boot CPU. The PSW masks for the system_call, external interrupt, i/o interrupt and program check handlers have the DAT bit set in this new prefix page.

At the time setup_lowcore() is called the system still runs without virtual address translation; the paging_init() function creates the kernel page table and loads CR13 with the kernel ASCE. Any code between setup_lowcore() and the end of paging_init() that has a BUG or WARN statement will create a program check that cannot be handled correctly, as there is no kernel page table yet.

To allow early WARN statements, initially set up the lowcore with DAT off and set the DAT bit only after paging_init() has completed.

Cc: stable@vger.kernel.org
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
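The resulting boot ordering, roughly sketched (the _dat_off/_dat_on split is what this pair of patches establishes; surrounding calls are elided):

```c
/* Sketch of the setup_arch() ordering after the patch: the new PSWs
 * start without PSW_MASK_DAT, and DAT is enabled only once the
 * kernel page table exists. */
void __init setup_arch(char **cmdline_p)
{
	/* ... */
	setup_lowcore_dat_off();	/* lowcore PSWs with DAT off */
	/* ... */
	paging_init();			/* build page table, load CR13 ASCE */
	setup_lowcore_dat_on();		/* safe to run with DAT from here on */
	/* ... */
}
```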
-
- 31 Jan 2019, 1 commit
Committed by Christian Borntraeger

commit 03aa047ef2db4985e444af6ee1c1dd084ad9fb4c upstream.

Right now the early machine detection code checks stsi 3.2.2 for "KVM" and sets MACHINE_IS_VM if the value is anything else. As the console detection uses diagnose 8 if MACHINE_IS_VM is true, this will crash Linux early for any non-z/VM system that sets a value other than "KVM". So instead of assuming z/VM, do not set any of MACHINE_IS_LPAR, MACHINE_IS_VM, or MACHINE_IS_KVM.

CC: stable@vger.kernel.org
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
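A sketch of the revised detection logic; the EBCDIC bytes for "KVM" (0xd2 0xe5 0xd4) are standard, but the structure fields and the z/VM helper here are assumptions:

```c
/* Sketch: classify the hypervisor only on a positive match and
 * otherwise leave every MACHINE_IS_* flag unset. */
if (stsi(vmms, 3, 2, 2) == 0 && vmms->count) {
	if (!memcmp(vmms->vm[0].cpi, "\xd2\xe5\xd4", 3))	/* "KVM" in EBCDIC */
		S390_lowcore.machine_flags |= MACHINE_FLAG_KVM;
	else if (cpi_is_zvm(vmms))				/* hypothetical helper */
		S390_lowcore.machine_flags |= MACHINE_FLAG_VM;
	/* anything else: do not guess, set no flag */
}
```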
-
- 25 Jun 2018, 1 commit
Committed by Vasily Gorbik

Introduce PARMAREA_END and use it for the memblock reserve of low memory, which is used for the lowcore, the kdump data mover code and page buffer, the early stack, and the parmarea. There is no need to reserve an area between PARMAREA_END and the decompressor _ehead.

Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
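In code terms this is a single low-memory reservation up to the new symbol; a sketch (the PARMAREA_END value itself lives in the s390 headers and is not reproduced here):

```c
/* Sketch: one reservation now covers the lowcore, the kdump data
 * mover and page buffer, the early stack and the parmarea. */
memblock_reserve(0, PARMAREA_END);
```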
-
- 16 Apr 2018, 1 commit
Committed by Heiko Carstens

Just add the new machine type number to the two places that matter.

Cc: <stable@vger.kernel.org> # v4.14+
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 11 Apr 2018, 1 commit
Committed by Martin Schwidefsky

With CONFIG_EXPOLINE_AUTO=y the call of spectre_v2_auto_early() via early_initcall is done *after* the early_param functions. This overwrites any settings done with the nobp/no_spectre_v2/spectre_v2 parameters. The code patching for the kernel is done after the evaluation of the early parameters but before the early_initcall is done. The end result is a kernel image that is patched correctly but the kernel modules are not.

Make sure that the nospec auto detection function is called before the early parameters are evaluated and before the code patching is done.

Fixes: 6e179d64 ("s390: add automatic detection of the spectre defense")
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
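A sketch of the corrected ordering in setup_arch(); the detection function name follows the expoline code of this era and should be treated as an assumption:

```c
/* Sketch: pick the automatic spectre defense before the early
 * parameters are parsed, so nobp/nospectre_v2/spectre_v2= can still
 * override it; code patching happens later either way. */
void __init setup_arch(char **cmdline_p)
{
	/* ... */
	nospec_auto_detect();	/* name assumed from the expoline code */
	parse_early_param();
	/* ... */
}
```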
-
- 19 Mar 2018, 1 commit
Committed by Farhan Ali

The S390 architecture does not support any graphics hardware, but with the latest support for Virtio GPU in Linux and Virtio GPU emulation in QEMU, it's possible to enable graphics for S390 using the Virtio GPU device.

To enable display we need to enable the Linux Virtual Terminal (VT) layer for S390. But the VT subsystem initializes quite early at boot, so we need a dummy console driver until the Virtio GPU driver is initialized and we can run the framebuffer console. The framebuffer console over a Virtio GPU device can be run in combination with the serial SCLP console (the default on S390). The SCLP console can still be accessed by management applications (e.g. via libvirt's virsh console).

Signed-off-by: Farhan Ali <alifm@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <e23b61f4f599ba23881727a1e8880e9d60cc6a48.1519315352.git.alifm@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 15 Mar 2018, 1 commit
Committed by Farhan Ali

The S390 architecture does not support any graphics hardware, but with the latest support for Virtio GPU in Linux and Virtio GPU emulation in QEMU, it's possible to enable graphics for S390 using the Virtio GPU device.

To enable display we need to enable the Linux Virtual Terminal (VT) layer for S390. But the VT subsystem initializes quite early at boot, so we need a dummy console driver until the Virtio GPU driver is initialized and we can run the framebuffer console. The framebuffer console over a Virtio GPU device can be run in combination with the serial SCLP console (the default on S390). The SCLP console can still be accessed by management applications (e.g. via libvirt's virsh console).

Signed-off-by: Farhan Ali <alifm@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <e23b61f4f599ba23881727a1e8880e9d60cc6a48.1519315352.git.alifm@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 27 Feb 2018, 1 commit
Committed by Vasily Gorbik

Common code defines linker symbols, which denote section start/end, in the form of char []. Referencing those symbols as _symbol or &_symbol yields the same result, but the "_symbol" form is more widespread across newly written code. Convert s390 specific code to this style. Also remove the unused _text symbol definition in boot/compressed/misc.c.

Signed-off-by: Vasily Gorbik <gor@linux.vnet.ibm.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
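Why the two spellings are interchangeable: an array name decays to a pointer to its first element, so for a linker symbol declared as char [] both forms denote the same address. A tiny illustration:

```c
/* Linker scripts export section boundaries as symbols; C code
 * declares them as incomplete char arrays. */
extern char _stext[], _etext[];

static unsigned long text_size(void)
{
	/* "_etext" (array-to-pointer decay) and "&_etext" compare
	 * equal as addresses; the plain form reads more naturally. */
	return (unsigned long)(_etext - _stext);
}
```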
-
- 07 Feb 2018, 1 commit
Committed by Martin Schwidefsky

Add CONFIG_EXPOLINE to enable the use of the new -mindirect-branch= and -mfunction-return= compiler options to create a kernel fortified against the spectre v2 attack.

With CONFIG_EXPOLINE=y all indirect branches will be issued with an execute type instruction. For z10 or newer the EXRL instruction will be used, for older machines the EX instruction. The typical indirect call

    basr %r14,%r1

is replaced with a PC relative call to a new thunk

    brasl %r14,__s390x_indirect_jump_r1

The thunk contains the EXRL/EX instruction to the indirect branch:

    __s390x_indirect_jump_r1:
            exrl 0,0f
            j .
    0:      br %r1

The detour via the execute type instruction has a performance impact. To get rid of the detour the new kernel parameters "nospectre_v2" and "spectre_v2=[on,off,auto]" can be used. If the parameter is specified the kernel and module code will be patched at runtime.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 05 Feb 2018, 1 commit
Committed by Martin Schwidefsky

To be able to switch off specific CPU alternatives with kernel parameters, make a copy of the facility bit mask provided by STFLE and use the copy for the decision to apply an alternative.

Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
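A sketch of the mechanism, assuming the copy sits next to the STFLE list in the lowcore (the field names are a guess); kernel parameters can then clear bits in the copy to suppress individual alternatives without misrepresenting the real hardware facilities:

```c
/* Sketch: snapshot the STFLE facility bits at boot; the alternatives
 * code tests the copy, so clearing a bit in the copy disables the
 * corresponding alternatives only. */
memcpy(&S390_lowcore.alt_stfle_fac_list,
       &S390_lowcore.stfle_fac_list,
       sizeof(S390_lowcore.alt_stfle_fac_list));
```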
-
- 24 Nov 2017, 1 commit
Committed by Greg Kroah-Hartman

It's good to have SPDX identifiers in all files to make it easier to audit the kernel tree for correct licenses. Update the arch/s390/kernel/ files with the correct SPDX license identifier based on the license text in the file itself. The SPDX identifier is a legally binding shorthand, which can be used instead of the full boilerplate text.

This work is based on a script and data from Thomas Gleixner, Philippe Ombredanne, and Kate Stewart.

Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Kate Stewart <kstewart@linuxfoundation.org>
Cc: Philippe Ombredanne <pombredanne@nexb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
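For reference, such an identifier is a single comment on the first line of a file; .c files use the // form, while headers conventionally use a /* ... */ block comment:

```c
// SPDX-License-Identifier: GPL-2.0
```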
-
- 09 Nov 2017, 2 commits
Committed by Heiko Carstens

Just use MACHINE_HAS_TE to decide if HWCAP_S390_TE needs to be added to elf_hwcap.

Suggested-by: Dan Horák <dan@danny.cz>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
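Which, in code, is about as small as it sounds; a sketch:

```c
/* Sketch: advertise transactional execution to user space only when
 * the machine flag reports the facility. */
if (MACHINE_HAS_TE)
	elf_hwcap |= HWCAP_S390_TE;
```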
-
Committed by Christian Borntraeger

With commit 7fb2b2d5 ("s390/virtio: remove the old KVM virtio transport") the pre-ccw virtio transport for s390 was removed. To complete the removal, the uapi header file that contains the related data structures must also be removed.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
-
- 19 Oct 2017, 1 commit
Committed by Martin Schwidefsky

The machine check extended save area is needed to store the vector registers and the guarded storage control block when a CPU is interrupted by a machine check. Move the slab cache allocation of the full save area to nmi.c; for early boot use a static __initdata block.

Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
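The early-boot half might look roughly like this; the type, constant, and lowcore field names are assumptions based on the description:

```c
/* Sketch: a static save area carries the boot CPU through early
 * machine checks until the slab cache in nmi.c exists; all names
 * here are illustrative. */
static struct mcesa boot_mcesa __initdata __aligned(MCESA_MAX_SIZE);

/* early boot: point the lowcore designation at the static area */
S390_lowcore.mcesad = (unsigned long) &boot_mcesa;
```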
-
- 18 Oct 2017, 2 commits
Committed by Martin Schwidefsky

The boot_vdso_data variable is related to the vdso code; the magic of the initial vdso area for the early boot and the replacement of it in vdso_init should all be put into vdso.c.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

-
Committed by Vasily Gorbik

Implement CPU alternatives, which allows to optionally patch newer instructions at runtime, based on CPU facilities availability. A new kernel boot parameter "noaltinstr" disables patching.

The current implementation is derived from x86 alternatives, although ideal instruction padding (when altinstr is longer than oldinstr) is added at compile time, and no oldinstr nops optimization has to be done at runtime. Also a couple of compile time sanity checks are done:

    1. oldinstr and altinstr must be <= 254 bytes long,
    2. oldinstr and altinstr must not have an odd length.

    alternative(oldinstr, altinstr, facility);
    alternative_2(oldinstr, altinstr1, facility1, altinstr2, facility2);

Both compile time and runtime padding consist of either a 6/4/2 byte nop, or a jump (brcl) plus a 2 byte nop filler if the padding is longer than 6 bytes.

The .altinstructions and .altinstr_replacement sections are part of the __init_begin : __init_end region and are freed after initialization.

Signed-off-by: Vasily Gorbik <gor@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 28 Sep 2017, 1 commit
Committed by Martin Schwidefsky

The queued spinlock code for s390 follows the principles of the common code qspinlock implementation, but with a few notable differences. The format of the spinlock_t locking word differs: s390 needs to store the logical CPU number of the lock holder in the spinlock_t to be able to use the diagnose 9c directed yield hypervisor call.

The inline code sequences for spin_lock and spin_unlock are nice and short. The inline portion of a spin_lock now typically looks like this:

    lhi %r0,0                  # 0 indicates an empty lock
    l   %r1,0x3a0              # CPU number + 1 from lowcore
    cs  %r0,%r1,<some_lock>    # lock operation
    jnz call_wait              # on failure call wait function
    locked:
    ...
    call_wait:
    la    %r2,<some_lock>
    brasl %r14,arch_spin_lock_wait
    j     locked

A spin_unlock is as simple as before:

    lhi %r0,0
    sth %r0,2(%r2)             # unlock operation

After a CPU has queued itself it may not enable interrupts again for the arch_spin_lock_flags() variant. The arch_spin_lock_wait_flags wait function is removed.

To improve performance the code implements opportunistic lock stealing. If the wait function finds a spinlock_t that indicates that the lock is free but there are queued waiters, the CPU may steal the lock up to three times without queueing itself. The lock stealing updates the steal counter in the lock word to prevent more than 3 steals. The counter is reset at the time the CPU next in the queue successfully takes the lock.

While the queued spinlocks improve performance in a system with dedicated CPUs, in a virtualized environment with continuously overcommitted CPUs the queued spinlocks can have a negative effect on performance. This is due to the fact that a queued CPU that is preempted by the hypervisor will block the queue at some point even without holding the lock. With the classic spinlock it does not matter if a CPU that waits for the lock is preempted. Therefore use the queued spinlock code only if the system runs with dedicated CPUs, and fall back to classic spinlocks when running with shared CPUs.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 09 Aug 2017, 1 commit
Committed by Heiko Carstens

If memory is fragmented it is unlikely that large order memory allocations succeed. This has been an issue with the vmcp device driver for a long time, since it requires large physically contiguous memory areas for large responses. To hopefully resolve this issue, make use of the contiguous memory allocator (CMA).

This patch adds a vmcp specific CMA area with a default size of 4MB. The size can be changed either via the VMCP_CMA_SIZE config option at compile time or with the "vmcp_cma" kernel parameter (e.g. "vmcp_cma=16m"). For any vmcp response buffers larger than 16k, memory from the CMA area will be allocated. If such an allocation fails, there is a fallback to the buddy allocator.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
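The allocation policy is a two-step fallback; a sketch with a hypothetical helper on the CMA side (the real driver's function names may differ):

```c
/* Sketch: prefer the dedicated CMA pool for big responses, fall back
 * to the buddy allocator. vmcp_cma_alloc() is a hypothetical helper. */
static void *vmcp_response_alloc(size_t size, int order)
{
	void *buf = NULL;

	if (size > 16 * 1024)		/* larger than 16k: try CMA first */
		buf = vmcp_cma_alloc(size);
	if (!buf)			/* CMA failed or unused: buddy allocator */
		buf = (void *)__get_free_pages(GFP_KERNEL, order);
	return buf;
}
```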
-
- 26 Jul 2017, 3 commits
Committed by Martin Schwidefsky

Add detection for machine type 0x3906 and set the ELF platform name to z14. Add the miscellaneous-instruction-extension 2 facility to the list of facilities for z14, and allow to generate code that only runs on a z14 machine.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

-
Committed by Martin Schwidefsky

The TOD epoch extension adds 8 epoch bits to the TOD clock to provide a continuous clock after 2042/09/17. The store-clock-extended (STCKE) instruction will store the epoch index in the first byte of the 16 bytes stored by the instruction. The read_boot_clock64 and the read_persistent_clock64 functions need to take the additional bits into account to give the correct result after 2042/09/17.

The clock-comparator register will stay 64 bit wide. The comparison of the clock-comparator with the TOD clock is limited to bytes 1 to 8 of the extended TOD format. To deal with the overflow problem due to an epoch change, the clock-comparator sign control in CR0 can be used to switch the comparison of the 64-bit TOD clock with the clock-comparator to a signed comparison.

The decision between the signed vs. unsigned clock-comparator comparisons is done at boot time. Only if the TOD clock is in the second half of a 142 year epoch is the signed comparison used. This solves the epoch overflow issue as long as the machine is booted at least once in an epoch.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
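Where the 142 years (and the 2042 date) come from: TOD bit 51 ticks once per microsecond, i.e. there are $2^{12}$ clock units per microsecond, so the 64-bit TOD value wraps after

$$
\frac{2^{64}\ \text{units}}{2^{12}\ \text{units}/\mu s} = 2^{52}\ \mu s \approx 4.5 \times 10^{15}\ \mu s \approx 142.7\ \text{years},
$$

and since the s390 TOD epoch begins on 1900-01-01, the first wrap lands in September 2042.

-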
Committed by Heiko Carstens

Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 13 Jul 2017, 1 commit
Committed by Xunlei Pang

As Eric said, "what we need to do is move the variable vmcoreinfo_note out of the kernel's .bss section. And modify the code to regenerate and keep this information in something like the control page. Definitely something like this needs a page all to itself, and ideally far away from any other kernel data structures. I clearly was not watching closely the data someone decided to keep this silly thing in the kernel's .bss section."

This patch allocates extra pages for these vmcoreinfo_XXX variables; one advantage is that it enhances some safety of vmcoreinfo, because vmcoreinfo now is kept far away from other kernel data structures.

Link: http://lkml.kernel.org/r/1493281021-20737-1-git-send-email-xlpang@redhat.com
Signed-off-by: Xunlei Pang <xlpang@redhat.com>
Tested-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Suggested-by: Eric Biederman <ebiederm@xmission.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Dave Young <dyoung@redhat.com>
Cc: Hari Bathini <hbathini@linux.vnet.ibm.com>
Cc: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 22 Mar 2017, 1 commit
Committed by Martin Schwidefsky

This adds a new system call to enable the use of guarded storage for user space processes. The system call takes two arguments, a command and a pointer to a guarded storage control block:

    s390_guarded_storage(int command, struct gs_cb *gs_cb);

The second argument is relevant only for the GS_SET_BC_CB command. The commands in detail:

    0 - GS_ENABLE
        Enable the guarded storage facility for the current task. The
        initial content of the guarded storage control block will be
        all zeros. After the enablement the user space code can use
        the load-guarded-storage-controls instruction (LGSC) to load
        an arbitrary control block. While a task is enabled the
        kernel will save and restore the current content of the
        guarded storage registers on context switch.
    1 - GS_DISABLE
        Disables the use of the guarded storage facility for the
        current task. The kernel will cease to save and restore the
        content of the guarded storage registers, and the task
        specific content of these registers is lost.
    2 - GS_SET_BC_CB
        Set a broadcast guarded storage control block. This is called
        per thread and stores a specific guarded storage control
        block in the task struct of the current task. This control
        block will be used for the broadcast event GS_BROADCAST.
    3 - GS_CLEAR_BC_CB
        Clears the broadcast guarded storage control block. The
        guarded storage control block is removed from the task struct
        that was established by GS_SET_BC_CB.
    4 - GS_BROADCAST
        Sends a broadcast to all thread siblings of the current task.
        Every sibling that has established a broadcast guarded
        storage control block will load this control block and will
        be enabled for guarded storage. The broadcast guarded storage
        control block is used up; a second broadcast without a
        refresh of the stored control block with GS_SET_BC_CB will
        not have any effect.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
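A hedged user-space usage sketch; the syscall number macro is assumed to come from the s390 uapi headers, and the GS_* values are taken from the command list above:

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

/* Command values as documented in the commit message above. */
#define GS_ENABLE  0
#define GS_DISABLE 1

int main(void)
{
	/* __NR_s390_guarded_storage is assumed to be provided by the
	 * s390 uapi headers; the gs_cb pointer is unused for GS_ENABLE. */
	long rc = syscall(__NR_s390_guarded_storage, GS_ENABLE, NULL);

	if (rc) {
		perror("s390_guarded_storage");
		return 1;
	}
	syscall(__NR_s390_guarded_storage, GS_DISABLE, NULL);
	return 0;
}
```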
-
- 02 Mar 2017, 2 commits
Committed by Ingo Molnar

But first introduce a trivial header and update usage sites.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ingo Molnar

We are going to split <linux/sched/task.h> out of <linux/sched.h>, which will have to be picked up from other headers and a couple of .c files. Create a trivial placeholder <linux/sched/task.h> file that just maps to <linux/sched.h> to make this patch obviously correct and bisectable. Include the new header in the files that are going to need it.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 17 Feb 2017, 1 commit
Committed by Heiko Carstens

Both MACHINE_HAS_PFMF and MACHINE_HAS_HPAGE are just an alias for MACHINE_HAS_EDAT1, so simply use MACHINE_HAS_EDAT1 instead.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 08 Feb 2017, 2 commits
Committed by Martin Schwidefsky

Add hardware capability bits and feature tags to /proc/cpuinfo for the "Vector Packed Decimal Facility" (tag "vxd" / hwcap bit 2^12) and the "Vector Enhancements Facility 1" (tag "vxe" / hwcap bit 2^13).

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
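In header terms the two bits would look like this; the macro names are assumptions, the values are the stated 2^12 and 2^13:

```c
/* Sketch of the two new hwcap bits; names assumed, values as
 * stated in the commit message. */
#define HWCAP_S390_VXRS_BCD	4096	/* 2^12, /proc/cpuinfo tag "vxd" */
#define HWCAP_S390_VXRS_EXT	8192	/* 2^13, /proc/cpuinfo tag "vxe" */
```

-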
Committed by Heiko Carstens

The current implementation of setup_randomness uses the stack address, and therefore the pointer to the SYSIB 3.2.2 block, as input data address. Furthermore the length of the input data is the number of virtual-machine description blocks, which is typically one. This means that typically a single zero byte is fed to add_device_randomness.

Fix both of these: use the address of the first virtual machine description block as input data address, and also use the correct length.

Fixes: bcfcbb6b ("s390: add system information as device randomness")
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
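A sketch of the corrected call, assuming the SYSIB 3.2.2 structure exposes a count and an array of VM description blocks (the field names are guesses):

```c
/* Sketch: feed the actual VM description blocks, with the correct
 * total length, into the randomness pool. */
if (stsi(vmms, 3, 2, 2) == 0 && vmms->count)
	add_device_randomness(&vmms->vm, sizeof(vmms->vm[0]) * vmms->count);
```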
-
- 07 Feb 2017, 1 commit
Committed by Heiko Carstens

Commit bcfcbb6b ("s390: add system information as device randomness") intended to add some virtual machine specific information to the randomness pool. Unfortunately it uses the page allocator before it is ready to use. As a result the page allocator always returns NULL and the setup_randomness function never adds anything to the randomness pool. To fix this, use memblock_alloc and memblock_free instead.

Fixes: bcfcbb6b ("s390: add system information as device randomness")
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
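Putting this and the previous randomness fix together, the function might look roughly like this, using the memblock API of this era (memblock_alloc returned a physical address, memblock_free took one):

```c
/* Sketch: allocate the SYSIB buffer from memblock, which is usable
 * this early, and hand it back afterwards. */
static void __init setup_randomness(void)
{
	struct sysinfo_3_2_2 *vmms;

	vmms = (struct sysinfo_3_2_2 *) memblock_alloc(PAGE_SIZE, PAGE_SIZE);
	if (stsi(vmms, 3, 2, 2) == 0 && vmms->count)
		add_device_randomness(&vmms->vm, sizeof(vmms->vm[0]) * vmms->count);
	memblock_free((unsigned long) vmms, PAGE_SIZE);
}
```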
-
- 16 Jan 2017, 1 commit
Committed by Heiko Carstens

reserve_initrd currently calls memblock_reserve even if the size to be reserved is zero. Even though the memblock core code can handle this correctly, it still yields confusing debug messages if memblock debugging is enabled. Therefore make sure to not call memblock_reserve with a size of zero.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
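The guard is a one-liner; a sketch using the INITRD_* macros s390 had at the time (the exact names are an assumption):

```c
/* Sketch: bail out early instead of reserving a zero-sized range. */
static void __init reserve_initrd(void)
{
	if (!INITRD_START || !INITRD_SIZE)
		return;
	initrd_start = INITRD_START;
	initrd_end = initrd_start + INITRD_SIZE;
	memblock_reserve(INITRD_START, INITRD_SIZE);
}
```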
-
- 14 Dec 2016, 1 commit
Committed by Martin Schwidefsky

Two of the messages introduced by the memblock conversion are reworded.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 07 Dec 2016, 2 commits
Committed by Heiko Carstens

Initialize the cpu topology, and therefore also the cpu to node mapping, much earlier. Fixes this warning and subsequent crashes when using the fake numa emulation mode on s390:

    WARNING: CPU: 0 PID: 1 at include/linux/cpumask.h:121 select_task_rq+0xe6/0x1a8
    CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.6.0-rc6-00001-ge9d867a6-dirty #28
    task: 00000001dd270008 ti: 00000001eccb4000 task.ti: 00000001eccb4000
    Krnl PSW : 0404c00180000000 0000000000176c56 (select_task_rq+0xe6/0x1a8)
               R:0 T:1 IO:0 EX:0 Key:0 M:1 W:0 P:0 AS:3 CC:0 PM:0 RI:0 EA:3
    Call Trace:
    ([<0000000000176c30>] select_task_rq+0xc0/0x1a8)
    ([<0000000000177d64>] try_to_wake_up+0x2e4/0x478)
    ([<000000000015d46c>] create_worker+0x174/0x1c0)
    ([<0000000000161a98>] alloc_unbound_pwq+0x360/0x438)
    ([<0000000000162550>] apply_wqattrs_prepare+0x200/0x2a0)
    ([<000000000016266a>] apply_workqueue_attrs_locked+0x7a/0xb0)
    ([<0000000000162af0>] apply_workqueue_attrs+0x50/0x78)
    ([<000000000016441c>] __alloc_workqueue_key+0x304/0x520)
    ([<0000000000ee3706>] default_bdi_init+0x3e/0x70)
    ([<0000000000100270>] do_one_initcall+0x140/0x1d8)
    ([<0000000000ec9da8>] kernel_init_freeable+0x220/0x2d8)
    ([<0000000000984a7a>] kernel_init+0x2a/0x150)
    ([<00000000009913fa>] kernel_thread_starter+0x6/0xc)
    ([<00000000009913f4>] kernel_thread_starter+0x0/0xc)

Reviewed-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Heiko Carstens

In order to be able to set up the cpu to node mappings early, it is a prerequisite to know which cpus are present. Therefore cpus must be detected much earlier than before. For sclp based cpu detection this requires yet another early sclp call, since the system is not ready to use the regular interrupt and memory allocations.

Reviewed-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 02 Dec 2016, 2 commits
Committed by Heiko Carstens

When converting from bootmem to memblock I missed a subtle difference: the memblock_alloc() functions return uninitialized memory, while the memblock_virt_alloc() functions return zeroed memory. This led to quite random early boot crashes. Therefore use the correct version everywhere now. Hopefully.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
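The distinction in one picture, using the memblock API as it existed before the interfaces were later merged:

```c
/* Sketch of the two era-specific interfaces. */
static void __init alloc_demo(phys_addr_t size, phys_addr_t align)
{
	phys_addr_t phys;
	void *virt;

	/* memblock_alloc(): returns a physical address; the memory is
	 * NOT zeroed, so callers assuming zeroed structures may crash
	 * at random. */
	phys = memblock_alloc(size, align);

	/* memblock_virt_alloc(): returns a virtual address to memory
	 * that IS zeroed, like the old bootmem allocator guaranteed. */
	virt = memblock_virt_alloc(size, align);

	(void)phys;
	(void)virt;
}
```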
-
Committed by Heiko Carstens

When re-adding crash kernel memory within setup_resources() the function memblock_add() is used. That function will add memory by default to node "MAX_NUMNODES" instead of node 0, like the memory detection code does. In case of !NUMA this will trigger this warning when the kernel generates the vmemmap:

    Usage of MAX_NUMNODES is deprecated. Use NUMA_NO_NODE instead
    WARNING: CPU: 0 PID: 0 at mm/memblock.c:1261 memblock_virt_alloc_internal+0x76/0x220
    CPU: 0 PID: 0 Comm: swapper Not tainted 4.9.0-rc6 #16
    Call Trace:
    [<0000000000d0b2e8>] memblock_virt_alloc_try_nid+0x88/0xc8
    [<000000000083c8ea>] __earlyonly_bootmem_alloc.constprop.1+0x42/0x50
    [<000000000083e7f4>] vmemmap_populate+0x1ac/0x1e0
    [<0000000000840136>] sparse_mem_map_populate+0x46/0x68
    [<0000000000d0c59c>] sparse_init+0x184/0x238
    [<0000000000cf45f6>] paging_init+0xbe/0xf8
    [<0000000000cf1d4a>] setup_arch+0xa02/0xae0
    [<0000000000ced75a>] start_kernel+0x72/0x450
    [<0000000000100020>] _stext+0x20/0x80

If NUMA is selected, numa_setup_memory() will fix the node assignments before the vmemmap is populated, so this warning will only appear if NUMA is not selected. To fix this simply use memblock_add_node() and re-add crash kernel memory explicitly to node 0.

Reported-and-tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Fixes: 4e042af4 ("s390/kexec: fix crash on resize of reserved memory")
Cc: <stable@vger.kernel.org> # v4.8+
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
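The fix itself is a one-line substitution; a sketch with the crash kernel resource:

```c
/* Sketch: re-add crash kernel memory explicitly on node 0, matching
 * what the memory detection code does for all other memory. */
memblock_add_node(crashk_res.start, resource_size(&crashk_res), 0);
```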
-
- 29 Nov 2016, 1 commit
Committed by Heiko Carstens

Get rid of all remaining alloc_bootmem calls and use memblock_alloc instead everywhere. This way we get rid of the inconsistent mixture of alloc_bootmem and memblock_alloc usages. Two of the alloc_bootmem_low calls within arch/s390/kernel/setup.c are replaced with memblock_alloc calls that don't enforce that the allocated memory is below 2GB. This restriction was never necessary.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 23 Nov 2016, 1 commit
Committed by Heiko Carstens

In order to make the cma infrastructure usable we need to add a small architecture backend which calls dma_contiguous_reserve. Otherwise we would end up with the cma allocator enabled, but no pool where memory can be allocated from.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
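The backend amounts to one call during setup_arch(); a sketch, with the limit argument being an assumption about a sensible bound on usable memory:

```c
/* Sketch: let the CMA core carve out its default pool once memory
 * detection and memblock setup are done. */
void __init setup_arch(char **cmdline_p)
{
	/* ... memory detection and memblock setup done ... */
	dma_contiguous_reserve(memory_end);	/* limit is an assumption */
	/* ... */
}
```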
-
- 11 Nov 2016, 2 commits
Committed by Heiko Carstens

This is the s390 variant of commit 15f4eae7 ("x86: Move thread_info into task_struct").

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky

Convert s390 to use a field in the struct lowcore for the CPU preemption count. It is a bit cheaper to access a lowcore field compared to a thread_info variable, and it removes the dependency on a task related structure.

bloat-o-meter on the vmlinux image for the default configuration (CONFIG_PREEMPT_NONE=y) reports a small reduction in text size:

    add/remove: 0/0 grow/shrink: 18/578 up/down: 228/-5448 (-5220)

A larger improvement is achieved with the default configuration but with CONFIG_PREEMPT=y and CONFIG_DEBUG_PREEMPT=n:

    add/remove: 2/6 grow/shrink: 59/4477 up/down: 1618/-228762 (-227144)

Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
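What an accessor on top of this looks like, sketched against the lowcore field the commit introduces (any masking of flag bits is omitted):

```c
/* Sketch of an asm/preempt.h style accessor: the preemption count is
 * read straight out of the per-CPU lowcore instead of thread_info. */
static inline int preempt_count(void)
{
	return READ_ONCE(S390_lowcore.preempt_count);
}
```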
-