- 18 Dec 2015, 3 commits
-
-
Committed by Heiko Carstens
Modifying the architecture level set facility lists was always very error prone. Given the numbering of the facility bits within the Principles of Operation, where the most significant bit number is 0, it happened many times that the wrong bits were set or cleared. Therefore this patch adds a tool "gen_facilities" which generates include/generated/facilites.h. The definition of the bits to be set is contained within arch/s390/include/asm/facilities_src.h and can be easily extended to e.g. also generate such lists for the KVM module. The generated file looks like this: #define FACILITIES_ALS _AC(0xc1006450f0040000,UL) #define FACILITIES_ALS_DWORDS 1 The facility bits defined in this patch match 1:1 to the current masks that can be found in head.S. That is, if the tool is executed with -march=z990, then the generated masks will equal the masks in head.S for CONFIG_MARCH_Z990. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
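A minimal sketch of the MSB-0 conversion such a generator has to perform; the array contents and program structure here are illustrative, not the actual gen_facilities source:

```c
#include <stdio.h>

/* Facility numbers use the Principles of Operation convention:
 * bit 0 is the MOST significant bit of the first double word. */
static const int facilities[] = { 0, 1, 7, 8, 9, 16, 17, 18 };

int main(void)
{
	unsigned long long mask = 0;
	unsigned int i;

	for (i = 0; i < sizeof(facilities) / sizeof(facilities[0]); i++)
		mask |= 1ULL << (63 - facilities[i]);	/* MSB-0 -> C shift */

	/* Only one double word is handled in this sketch. */
	printf("#define FACILITIES_ALS _AC(0x%016llx,UL)\n", mask);
	printf("#define FACILITIES_ALS_DWORDS 1\n");
	return 0;
}
```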
-
Committed by Heiko Carstens
head.s contains an stfle instruction which stores its result at the storage location that is assigned to the stfl instruction. This is currently not a problem, since we only care about one double word. However if the number of double words in the ALS bitfield grows, the current code is not very stable. E.g. before issuing the stfle command the memory to which it stores must be cleared, since the instruction may or may not clear memory contents where no bits are set. To simplify the code a bit, always use the storage location that we reserved for the stfle result. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Heiko Carstens
Now that 31 bit support is gone, the assembler always knows about the stfl instruction. Therefore let's use a readable mnemonic. Also remove the unneeded extable entry for the inline assembly and fix the output constraint. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 27 Nov 2015, 16 commits
-
-
Committed by Martin Schwidefsky
It does not make sense to try to relinquish the time slice with diag 0x9c to a CPU in a state that does not allow the CPU to be scheduled. The scenario where this can happen is a CPU waiting in udelay/mdelay while holding a spin-lock. Add a CIF bit to tag a CPU in enabled wait and use it to detect that a yield to that CPU will not be successful, and skip the diagnose call. Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
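A rough sketch of the idea; CIF_ENABLED_WAIT, test_cpu_flag_of() and smp_yield_cpu() follow the commit's terminology, and the wrapper function is illustrative:

```c
/* CIF_ENABLED_WAIT tags a CPU sitting in enabled wait; yielding our
 * time slice to it via diagnose 0x9c cannot help, so skip the call. */
static inline void yield_to_cpu_sketch(int cpu)
{
	if (test_cpu_flag_of(CIF_ENABLED_WAIT, cpu))
		return;			/* target cannot be scheduled */
	smp_yield_cpu(cpu);		/* issues diagnose 0x9c */
}
```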
-
Committed by Heiko Carstens
is_32bit_task() used to be helpful when we still had CONFIG_32BIT. Since that is gone, it is nowadays identical to is_compat_task(). So remove it. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Michael Holzheu
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Sascha Silbe
When running under qemu with the default configuration (-nographic), there is only a VT220 SCLP console, no line-mode SCLP console. Add VT220 support to the early SCLP console so the user has a chance to see critical error messages during early boot. None of the existing users of _sclp_print_early() check the return code. Instead of trying to come up with return code semantics when printing to multiple consoles (any or all of which may fail), we just drop the return code entirely. Tested on z/VM (line mode console) and LPAR (VT220 and line mode console). Tested on qemu/KVM with VT220 console and/or line mode console. Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com> Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Christian Borntraeger
Since commit b006f19b ("lib/vsprintf.c: handle invalid format specifiers more robustly") I get errors like [...] Krnl Code: 00000000004e2410: c00400000000 brcl 0,4e2410 Please remove unsupported %r in format string [ 8.179483] ------------[ cut here ]------------ [ 8.179484] WARNING: at lib/vsprintf.c:1781 Turns out that our disassembler relied on %r not being used as a format string. Let's do the proper escaping of our decode buffers. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
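A minimal standalone illustration of the escaping idea (the helper and buffer contents are made up; the kernel fix applies the same '%' doubling to its decode buffers before they reach printk):

```c
#include <stdio.h>

/* Double every '%' so a later printf/printk of the buffer treats
 * register names like %r1 as literal text, not format specifiers. */
static void print_escaped(const char *buf)
{
	char out[256];
	size_t i, j = 0;

	for (i = 0; buf[i] && j < sizeof(out) - 2; i++) {
		if (buf[i] == '%')
			out[j++] = '%';		/* "%%" prints '%' */
		out[j++] = buf[i];
	}
	out[j] = '\0';
	printf(out);				/* safe after escaping */
	printf("\n");
}

int main(void)
{
	print_escaped("lgr %r1,%r2");		/* prints: lgr %r1,%r2 */
	return 0;
}
```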
-
Committed by Gerald Schaefer
DMA addresses returned from map_page() are calculated by using an iommu bitmap plus a start_dma offset. The size of this bitmap is based on the main memory size. If we have more than (4 TB - start_dma) main memory, the DMA address calculation will also produce addresses > 4 TB. Such addresses cannot be inserted in the 3-level DMA page table; instead, the entries modulo 4 TB will be overwritten. Fix this by restricting the iommu bitmap size to (4 TB - start_dma). Also set zdev->end_dma to the actual end address of the usable range, instead of the theoretical maximum as reported by the hardware, which fixes a sanity check in dma_map() and also the IOMMU API domain geometry aperture calculation. Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Reviewed-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
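The arithmetic at the heart of the fix, as a standalone sketch with invented example values:

```c
#include <stdio.h>

#define TB (1ULL << 40)

int main(void)
{
	unsigned long long start_dma = 1ULL << 32;	/* example offset   */
	unsigned long long bitmap_span = 8 * TB;	/* from memory size */

	/* A 3-level DMA page table covers 4 TB from start_dma onward;
	 * anything beyond would wrap modulo 4 TB and overwrite entries. */
	if (bitmap_span > 4 * TB - start_dma)
		bitmap_span = 4 * TB - start_dma;

	printf("end_dma = 0x%llx\n", start_dma + bitmap_span - 1);
	return 0;
}
```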
-
Committed by David Hildenbrand
Let's annotate it correctly, so we directly get a warning if we ever were to use it in an atomic/preempt_disable/spinlock environment. Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
The spinlock implementation calls the diagnose 0x9c / 0x44 immediately if the SIGP sense running reported the target CPU as not running. The diagnose 0x9c is a hint to the hypervisor to schedule the target CPU in preference to the source CPU that issued the diagnose. It can happen that on return from the diagnose the target CPU has not been scheduled yet, e.g. if the target logical CPU is on another physical CPU and the hypervisor did not want to migrate the logical CPU. Avoid the immediate repeat of the diagnose instruction; instead, do the retry loop before the next invocation of diagnose 0x9c. Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
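A simplified sketch of the reordered loop; try_acquire() and cpu_is_running() are placeholders, SPIN_RETRY follows the s390 spinlock code's retry constant:

```c
/* Spin a bounded number of times before each diagnose 0x9c, instead
 * of re-issuing the hint immediately when the owner is not running. */
static void spin_lock_sketch(int *lock, int owner_cpu)
{
	int retry;

	for (;;) {
		for (retry = 0; retry < SPIN_RETRY; retry++)
			if (try_acquire(lock))
				return;
		if (!cpu_is_running(owner_cpu))
			smp_yield_cpu(owner_cpu);	/* diagnose 0x9c */
	}
}
```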
-
Committed by Martin Schwidefsky
Introduce save_area_alloc(), save_area_boot_cpu(), save_area_add_regs() and save_area_add_vxrs() to deal with storing the CPU state in case of a system dump. Remove struct save_area and save_area_ext, and create a new struct save_area as a local definition in arch/s390/kernel/crash_dump.c. Copy each individual field from the hardware status area to the save area, storing the minimum of required data. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
To collect the CPU registers of the crashed system, allocate a single page with memblock_alloc_base and use it as a copy buffer. Replace the stop-and-store-status sigp with a store-status-at-address sigp in smp_save_dump_cpus() and smp_store_status(). In both cases the target CPU is already stopped and store-status-at-address avoids the detour via the absolute zero page. For kexec simplify s390_reset_system and call store_status() before the prefix register of the boot CPU has been set to zero. Use STPX to store the prefix register and remove dump_prefix_page. Acked-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
Replace the SAVE_AREA_BASE offset calculations in reipl.S with the assembler constant for the location of each register status area. Use __LC_FPREGS_SAVE_AREA instead of SAVE_AREA_BASE in the three remaining code locations and remove the definition of SAVE_AREA_BASE. Acked-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
Replace the offsets based on the struct save_area with the offset constants from asm-offsets.c based on the struct _lowcore. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
Introduce two copy functions for the memory of the dumped system, copy_oldmem_kernel() to copy to the virtual kernel address space and copy_oldmem_user() to copy to user space. Acked-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
The s390 architecture can store the CPU registers of the crashed system after the kdump kernel has been started and this is the preferred way. Remove the remaining code fragments that deal with storing CPU registers while the crashed system is still active. Acked-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
New versions of the SCSI dumper use the /dev/vmcore interface instead of zcore mem. Remove the outdated interface. Acked-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Committed by Martin Schwidefsky
The /sys/kernel/debug/zcore/mem interface delivers the memory of the old system with the CPU registers stored to the assigned locations in each prefix page. For the vector registers the prefix page of each CPU holds the address of a 1024 byte save area at offset 0x11b0. But the /sys/kernel/debug/zcore/mem interface fails to copy the vector registers saved at boot of the zfcpdump kernel into the dump image. Copy the saved vector registers of a CPU to the output buffer if the memory area that is read via /sys/kernel/debug/zcore/mem intersects with the vector register save area of this CPU. Acked-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
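A self-contained sketch of the overlap handling; function and parameter names are invented, the real code lives in the zcore read path:

```c
/* If a read [from, from+count) of old memory overlaps a CPU's vector
 * register save area [sa, sa+size), serve the overlapping bytes from
 * the copy saved at boot of the zfcpdump kernel. */
static void fixup_vx_overlap(char *out, unsigned long from,
			     unsigned long count, const char *vx_copy,
			     unsigned long sa, unsigned long size)
{
	unsigned long lo = from > sa ? from : sa;
	unsigned long hi = from + count < sa + size ? from + count : sa + size;
	unsigned long a;

	for (a = lo; a < hi; a++)	/* empty range if no intersection */
		out[a - from] = vx_copy[a - sa];
}
```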
-
- 25 Nov 2015, 7 commits
-
-
Committed by Haozhong Zhang
This patch removes the vpid check when emulating the nested invvpid instruction of type all-contexts invalidation. The existing code is incorrect because: (1) According to Intel SDM Vol 3, Section "INVVPID - Invalidate Translations Based on VPID", the invvpid instruction does not check the vpid in the invvpid descriptor when its type is all-contexts invalidation. (2) According to the same document, invvpid of type all-contexts invalidation does not require that there is an active VMCS, so get_vmcs12() in the existing code may result in a NULL-pointer dereference. In practice, it can crash both KVM itself and L1 hypervisors that use invvpid (e.g. Xen). Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
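A simplified sketch of the corrected emulation flow; all type, constant and helper names here are invented stand-ins for the KVM internals:

```c
/* For an all-contexts invalidation, the SDM says the vpid field of
 * the descriptor is ignored and no active VMCS is required, so the
 * emulation must touch neither. */
static int handle_invvpid_sketch(struct vcpu *vcpu, int type, u16 vpid)
{
	switch (type) {
	case INVVPID_ALL_CONTEXT:
		/* No vpid check, no get_vmcs12(): vmcs12 may be NULL. */
		sync_all_vpids(vcpu);
		return 0;
	default:
		return fail_invalid_operand(vcpu);
	}
}
```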
-
Committed by Mark Rutland
If we call __kvm_hyp_panic while a guest context is active, we call __restore_sysregs before acquiring the system register values for the panic, in the process throwing away the PAR_EL1 value at the point of the panic. This patch modifies __kvm_hyp_panic to stash the PAR_EL1 value prior to restoring host register values, enabling us to report the original values at the point of the panic. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Mark Rutland
Currently __kvm_hyp_panic uses %p for values which are not pointers, such as the ESR value. This can confusingly lead to "(null)" being printed for the value. Use %x instead, and only use %p for host pointers. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Cc: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Christoffer Dall
We were setting the physical active state on the GIC distributor in a preemptible section, which could cause us to set the active state on a different physical CPU from the one we were actually going to run on; havoc ensues. Since we are no longer descheduling/scheduling soft timers in the flush/sync timer functions, simply move the timer flush into a non-preemptible section. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Marc Zyngier
Cortex-A57 parts up to r1p2 can misreport Stage 2 translation faults when a Stage 1 permission fault or device alignment fault should have been reported. This patch implements the workaround (which is to validate that the Stage-1 translation actually succeeds) by using code patching. Cc: stable@vger.kernel.org Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Marc Zyngier
When running a 32bit guest under a 64bit hypervisor, the ARMv8 architecture defines a mapping of the 32bit registers in the 64bit space. This includes banked registers that are being demultiplexed over the 64bit ones. On exceptions caused by an operation involving a 32bit register, the HW exposes the register number in the ESR_EL2 register. It was so far understood that SW had to distinguish between AArch32 and AArch64 accesses (based on the current AArch32 mode and register number). It turns out that I misinterpreted the ARM ARM, and the clue is in D1.20.1: "For some exceptions, the exception syndrome given in the ESR_ELx identifies one or more register numbers from the issued instruction that generated the exception. Where the exception is taken from an Exception level using AArch32 these register numbers give the AArch64 view of the register." Which means that the HW is already giving us the translated version, and that we shouldn't try to interpret it at all (for example, doing an MMIO operation from the IRQ mode using the LR register leads to very unexpected behaviours). The fix is thus not to perform a call to vcpu_reg32() at all from vcpu_reg(), and use whatever register number is supplied directly. The only case we need to find out about the mapping is when we actively generate a register access, which only occurs when injecting a fault in a guest. Cc: stable@vger.kernel.org Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
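A sketch of the resulting accessor (struct and field names simplified, not the actual arm64 KVM code):

```c
/* ESR_ELx already reports the AArch64 view of the register number,
 * even for exceptions taken from AArch32, so index the 64-bit
 * register file directly -- no banked-register translation. */
static unsigned long *vcpu_reg_sketch(struct vcpu_sketch *vcpu, u8 num)
{
	return &vcpu->regs[num];
}
```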
-
Committed by Ard Biesheuvel
The open coded tests for checking whether a PTE maps a page as uncached use a flawed '(pte_val(xxx) & CONST) != CONST' pattern, which is not guaranteed to work since the type of a mapping is not a set of mutually exclusive bits. For HYP mappings, the type is an index into the MAIR table (i.e., the index itself does not contain any information whatsoever about the type of the mapping), and for stage-2 mappings it is a bit field where normal memory and device types are defined as follows: #define MT_S2_NORMAL 0xf #define MT_S2_DEVICE_nGnRE 0x1 I.e., masking *and* comparing with the latter matches on the former, and we have been getting lucky merely because the S2 device mappings also have the PTE_UXN bit set, or we would misidentify memory mappings as device mappings. Since the unmap_range() code path (which contains one instance of the flawed test) is used both for HYP mappings and stage-2 mappings, and considering the difference between the two, it is non-trivial to fix this by rewriting the tests in place, as it would involve passing down the type of mapping through all the functions. However, since HYP mappings and stage-2 mappings both deal with host physical addresses, we can simply check whether the mapping is backed by memory that is managed by the host kernel, and only perform the D-cache maintenance if this is the case. Cc: stable@vger.kernel.org Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Pavel Fedin <p.fedin@samsung.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
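The flaw is easy to demonstrate with the two stage-2 memory-type values quoted above:

```c
#include <stdio.h>

#define MT_S2_NORMAL		0xf
#define MT_S2_DEVICE_nGnRE	0x1

int main(void)
{
	unsigned int type = MT_S2_NORMAL;

	/* 0xf & 0x1 == 0x1: masking-and-comparing with the device type
	 * also "matches" normal memory -- the bits are not exclusive. */
	if ((type & MT_S2_DEVICE_nGnRE) == MT_S2_DEVICE_nGnRE)
		printf("normal memory misidentified as device\n");
	return 0;
}
```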
-
- 22 Nov 2015, 7 commits
-
-
Committed by Helge Deller
Adjust the linker script and map_pages() to map kernel text and data on physical 1MB huge/large pages. Signed-off-by: Helge Deller <deller@gmx.de>
-
Committed by Helge Deller
This patch adds huge page support to allow userspace to allocate huge pages and to use the hugetlbfs filesystem on 32- and 64-bit Linux kernels. A later patch will add kernel support to map kernel text and data on huge pages. The only requirement is that the kernel needs to be compiled for a PA8X00 CPU (PA2.0 architecture). Older PA1.X CPUs do not support variable page sizes. 64bit kernels are compiled for PA2.0 by default. Technically on parisc multiple physical huge pages may be needed to emulate standard 2MB huge pages. Signed-off-by: Helge Deller <deller@gmx.de>
-
Committed by Helge Deller
Use the 22bit instead of the 17bit branch instruction on a 64bit kernel to reach the do_syscall_trace_exit function from the gateway page. A huge page enabled kernel may need the additional branch distance bits. Signed-off-by: Helge Deller <deller@gmx.de>
-
Committed by Helge Deller
For the 64bit kernel the initial 16 MB of kernel memory might become too small if you build a kernel with many modules built-in and with kernel text and data areas mapped on huge pages. This patch increases the initial mapping to 32MB for 64bit kernels and keeps 16MB for 32bit kernels. Signed-off-by: Helge Deller <deller@gmx.de>
-
Committed by Helge Deller
A fault vector on parisc needs to be 2K aligned. Furthermore the checksum of the fault vector needs to sum up to 0, which is calculated and written at runtime. Up to now we aligned both PA20 and PA11 fault vectors on the same 4K page in order to easily write the checksum after having mapped the kernel read-only (by mapping this page only as read-write). But when we want to map the kernel text and data on huge pages this makes things harder. So, simplify it by aligning both fault vectors on 2K boundaries and write the checksum before we map the page read-only. Signed-off-by: Helge Deller <deller@gmx.de>
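A toy standalone demonstration of the zero-sum checksum scheme, with one slot reserved for the computed checksum (the vector contents are arbitrary example values):

```c
#include <stdio.h>

int main(void)
{
	unsigned int vec[8] = { 0x12345678, 0x9abcdef0, 1, 2, 3, 4, 5, 0 };
	unsigned int sum = 0;
	int i;

	for (i = 0; i < 7; i++)
		sum += vec[i];
	vec[7] = -sum;		/* unsigned wrap-around: total becomes 0 */

	for (sum = 0, i = 0; i < 8; i++)
		sum += vec[i];
	printf("total = %u\n", sum);	/* prints: total = 0 */
	return 0;
}
```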
-
Committed by Helge Deller
Huge pages on parisc will have the same size as one pmd table: 2MB on a 64bit kernel with 4K kernel page size, and 4MB on a 32bit kernel with 4K kernel pages. Since parisc does not physically support a 2MB huge page size, emulate it with two consecutive 1MB page sizes instead. Keeping the same huge page size as one pmd will allow us to add transparent huge page support later on. Bit 21 in the pte flags was unused and will now be used to mark a page as a huge page (_PAGE_HPAGE_BIT). Signed-off-by: Helge Deller <deller@gmx.de>
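The resulting geometry for a 64bit kernel with 4K base pages, sketched from the text above (only _PAGE_HPAGE_BIT is a name taken from the patch; the others are standard kernel constants):

```c
#define PAGE_SHIFT	12		/* 4K base pages               */
#define PMD_SHIFT	21		/* 512 ptes * 4K = 2MB per pmd */
#define HPAGE_SHIFT	PMD_SHIFT	/* huge page == one pmd entry  */
#define _PAGE_HPAGE_BIT	21		/* software huge-page pte flag */
```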
-
Committed by Helge Deller
Drop the MADV_xxK_PAGES flags, which were never used and came from a proposed API that was never integrated into the generic Linux kernel code. Cc: stable@vger.kernel.org Signed-off-by: Helge Deller <deller@gmx.de>
-
- 20 Nov 2015, 6 commits
-
-
Committed by Alban Bedel
As I'm using a board with a broken old bootloader I hardcoded the mips_machtype and didn't notice that the machine entry was still missing. [ralf@linux-mips.org: Fixed spelling mistake noticed by Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>.] Signed-off-by: Alban Bedel <albeu@free.fr> Cc: Qais Yousef <qais.yousef@imgtec.com> Cc: Felix Fietkau <nbd@openwrt.org> Cc: Andrew Bresticker <abrestic@chromium.org> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/11503/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Alban Bedel
There are 2 registers that are 8 bytes long, not 4. Signed-off-by: Alban Bedel <albeu@free.fr> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Jason Cooper <jason@lakedaemon.net> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Alexander Couzens <lynxis@fe80.eu> Cc: Joel Porquet <joel@porquet.org> Cc: Andrew Bresticker <abrestic@chromium.org> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/11508/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Alban Bedel
The DDR control initialization needs to know the SoC type, however ath79_detect_sys_type() was called after ath79_ddr_ctrl_init(). Reverse the order to fix the DDR control initialization on ar71xx and ar934x. Signed-off-by: Alban Bedel <albeu@free.fr> Cc: Felix Fietkau <nbd@openwrt.org> Cc: Qais Yousef <qais.yousef@imgtec.com> Cc: Andrew Bresticker <abrestic@chromium.org> Cc: stable@vger.kernel.org # v4.2+ Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/11500/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
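The fix is purely an ordering change; as a sketch (the wrapper function is illustrative, the two callees are named in the commit):

```c
/* ath79_ddr_ctrl_init() consumes the detected SoC type, so detection
 * must run first on ar71xx and ar934x. */
static void __init setup_order_sketch(void)
{
	ath79_detect_sys_type();	/* was called second before the fix */
	ath79_ddr_ctrl_init();
}
```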
-
Committed by Helge Deller
The definition of start_thread_som was planned to be used to execute HP-UX SOM binaries. Since HP-UX compatibility was dropped with kernel 4.0 there is no need to carry it further. Signed-off-by: Helge Deller <deller@gmx.de>
-
Committed by Helge Deller
The first pmd entry is marked with PxD_FLAG_ATTACHED instead of _PAGE_GATEWAY. Signed-off-by: Helge Deller <deller@gmx.de>
-
Committed by Yang Shi
As previously reported, some userspace applications depend on the bogomips shown by /proc/cpuinfo. Although there is much less legacy impact on aarch64 than arm, it does break libvirt. This patch reverts commit 326b16db ("arm64: delay: don't bother reporting bogomips in /proc/cpuinfo"), but with some tweaks due to context change and without the pr_info(). Fixes: 326b16db ("arm64: delay: don't bother reporting bogomips in /proc/cpuinfo") Signed-off-by: Yang Shi <yang.shi@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Cc: <stable@vger.kernel.org> # 3.12+ Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 19 Nov 2015, 1 commit
-
-
Committed by David Hildenbrand
Until now, VCPUs were always created sequentially with incrementing VCPU ids. Therefore, the index in the VCPUs array matched the id. As sequential creation might change with cpu hotplug, let's use the correct lookup function to find a VCPU by id, not array index. Let's also use kvm_lookup_vcpu() for validation of the sending VCPU on external call injection. Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Cc: stable@vger.kernel.org # db27a7a3 KVM: Provide function for VCPU lookup by id
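What such a lookup helper does, sketched with the kvm_for_each_vcpu iterator (the helper name follows the commit text; this is a sketch, not the actual patch):

```c
static struct kvm_vcpu *kvm_lookup_vcpu(struct kvm *kvm, int id)
{
	struct kvm_vcpu *vcpu;
	int i;

	/* Iterate instead of indexing: with cpu hotplug the array
	 * index no longer matches the VCPU id. */
	kvm_for_each_vcpu(i, vcpu, kvm)
		if (vcpu->vcpu_id == id)
			return vcpu;
	return NULL;
}
```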
-