- 09 August 2019, 10 commits
-
-
Submitted by Steve Capper

Most of the machinery is now in place to enable 52-bit kernel VAs that are detectable at boot time.

This patch adds a Kconfig option for 52-bit user and kernel addresses and plumbs in the requisite CONFIG_ macros, as well as setting TCR.T1SZ, physvirt_offset and vmemmap at early boot.

To simplify things, this patch also removes the 52-bit user/48-bit kernel Kconfig option.

Signed-off-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Steve Capper

In a later patch we will need a slightly larger VMEMMAP region to accommodate boot-time selection between 48/52-bit kernel VAs.

This patch modifies the formula for computing VMEMMAP_SIZE to depend explicitly on PAGE_OFFSET and the start of kernel-addressable memory. (This allows for a slightly larger direct linear map in future.)

Signed-off-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
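A definition of the kind described might look like this; VMEMMAP_SIZE, PAGE_OFFSET and PAGE_SHIFT are real arm64 symbols, while the helper names (_PAGE_END, STRUCT_PAGE_MAX_SHIFT) are assumptions based on the headers of this era:

    /*
     * Size the vmemmap to hold one struct page per page of the largest
     * possible linear region, from PAGE_OFFSET to the end of the linear map.
     */
    #define VMEMMAP_SIZE ((_PAGE_END(VA_BITS_MIN) - PAGE_OFFSET) \
                          >> (PAGE_SHIFT - STRUCT_PAGE_MAX_SHIFT))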
-
Submitted by Steve Capper

vmemmap is a preprocessor definition that depends on a variable, memstart_addr. In a later patch we will need to expand the size of the VMEMMAP region and optionally modify vmemmap, depending upon whether or not hardware support is available for 52-bit virtual addresses.

This patch changes vmemmap to be a variable. As the old definition depended on a variable load, this should not affect performance noticeably.

Signed-off-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
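A sketch of the conversion; the old macro body below follows the arm64 code of this period, but treat the details as assumptions:

    /* was: #define vmemmap ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT)) */
    struct page *vmemmap __ro_after_init;

    /* assigned once memstart_addr is known, early in boot: */
    vmemmap = ((struct page *)VMEMMAP_START) - (memstart_addr >> PAGE_SHIFT);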
-
Submitted by Steve Capper

When running with a 52-bit userspace VA and a 48-bit kernel VA, we offset ttbr1_el1 to allow the kernel pagetables with a 52-bit PTRS_PER_PGD to be used for both userspace and kernel.

Moving on to a 52-bit kernel VA, we no longer require this offset to ttbr1_el1 should we be running on a system with HW support for 52-bit VAs.

This patch introduces conditional logic to offset_ttbr1 to query SYS_ID_AA64MMFR2_EL1 whenever 52-bit VAs are selected. If there is HW support for 52-bit VAs, then the ttbr1 offset is skipped.

We choose to read a system register rather than vabits_actual because offset_ttbr1 can be called in places where the kernel data is not actually mapped.

Calls to offset_ttbr1 appear to be made from rarely called code paths, so this extra logic is not expected to adversely affect performance.

Signed-off-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
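offset_ttbr1 itself is an assembler macro, but in rough C terms the added check amounts to the following; the register accessor and field/offset names are real arm64 symbols, while the overall shape is an illustration rather than the patch itself:

    u64 mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);

    /* ID_AA64MMFR2_EL1.VARange != 0 means the HW supports 52-bit VAs */
    if (((mmfr2 >> ID_AA64MMFR2_LVA_SHIFT) & 0xf) == 0)
        ttbr1 += TTBR1_BADDR_4852_OFFSET;  /* 48-bit fallback keeps the offset */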
-
Submitted by Steve Capper

In order to support 52-bit kernel addresses detectable at boot time, one needs to know the actual VA_BITS detected. A new variable, vabits_actual, is introduced in this commit and employed for the KVM hypervisor layout, KASAN, fault handling and phys-to/from-virt translation, where there would normally be compile-time constants.

In order to maintain performance in phys_to_virt, another variable, physvirt_offset, is introduced.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
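A simplified sketch of the translation this enables; the variable names come from the commit, while the macro body is illustrative:

    s64 physvirt_offset;   /* PHYS_OFFSET - PAGE_OFFSET, fixed once at boot */
    u64 vabits_actual;     /* VA size detected at boot: 48 or 52 */

    /* linear-map translation remains a single subtraction: */
    #define __phys_to_virt(x) ((unsigned long)((x) - physvirt_offset))

The reverse direction adds physvirt_offset back for linear-map addresses, so the hot paths stay a single add/subtract of a boot-time constant.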
-
Submitted by Steve Capper

In order to support 52-bit kernel addresses detectable at boot time, the kernel needs to know the most conservative VA_BITS possible, should it need to fall back to this quantity due to lack of hardware support.

A new compile-time constant, VA_BITS_MIN, is introduced in this patch and employed for the KASAN end address, KASLR, and the EFI stub.

For Arm, if 52-bit VA support is unavailable the fallback is to 48 bits. In other words: VA_BITS_MIN = min(48, VA_BITS).

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
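Expressed as a header definition, this is simply (a sketch matching the stated formula):

    #ifdef CONFIG_ARM64_VA_BITS_52
    #define VA_BITS_MIN  (48)       /* fall back to 48 bits without HW support */
    #else
    #define VA_BITS_MIN  (VA_BITS)  /* configured size is already <= 48 */
    #endif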
-
Submitted by Steve Capper

The kernel page table dumper assumes that the placement of VA regions is constant and determined at compile time. As we are about to introduce variable VA logic, we need to be able to determine certain regions at boot time. Specifically, VA_START and KASAN_SHADOW_START will depend on whether or not the system is booted with 52-bit kernel VAs.

This patch adds logic to the kernel page table dumper such that these regions can be computed at boot time.

Signed-off-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Steve Capper

KASAN_SHADOW_OFFSET is a constant that is supplied to gcc as a command-line argument and affects the codegen of the inline address sanitiser.

Essentially, for an example memory access:

    *ptr1 = val;

the compiler will insert logic similar to the below:

    shadowValue = *((ptr1 >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET);
    if (somethingWrong(shadowValue))
        flagAnError();

This code sequence is inserted into many places, thus KASAN_SHADOW_OFFSET is essentially baked into many places in the kernel text.

If we want to run a single kernel binary with multiple address spaces, then we need to do this with KASAN_SHADOW_OFFSET fixed.

Thankfully, due to the way KASAN_SHADOW_OFFSET is used to provide shadow addresses, we know that the end of the shadow region is constant w.r.t. VA space size:

    KASAN_SHADOW_END = (~0 >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET

This means that if we increase the size of the VA space, the start of the KASAN region expands into lower addresses whilst the end of the KASAN region is fixed.

Currently the arm64 code computes KASAN_SHADOW_OFFSET at build time via build scripts, with the VA size used as a parameter. (There are build-time checks in the C code too, to ensure that the expected values are being derived.) It is sufficient, and indeed a simplification, to remove the build scripts (and build-time checks) entirely and instead provide KASAN_SHADOW_OFFSET values directly.

This patch removes the logic to compute KASAN_SHADOW_OFFSET in the arm64 Makefile; instead we adopt the approach used by x86 to supply offset values in Kconfig. To help debug/develop future VA space changes, the Makefile logic has been preserved in a script file in the arm64 Documentation folder.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
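In C terms, the address-to-shadow mapping the commit relies on looks like this; kasan_mem_to_shadow is the real kernel helper, and the body is a sketch:

    static inline void *kasan_mem_to_shadow(const void *addr)
    {
        /* one shadow byte covers 2^KASAN_SHADOW_SCALE_SHIFT bytes */
        return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
               + KASAN_SHADOW_OFFSET;
    }

    /*
     * Hence the top of the shadow is independent of VA_BITS:
     * KASAN_SHADOW_END = (~0UL >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET
     */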
-
Submitted by Steve Capper

In order to allow for a KASAN shadow that changes size at boot time, one must fix KASAN_SHADOW_END for both 48- and 52-bit VAs and "grow" the start address. Also, it is highly desirable to maintain the same function addresses in the kernel .text between VA sizes. Both of these requirements necessitate flipping the kernel address space halves such that the direct linear map occupies the lower addresses.

This patch puts the direct linear map in the lower addresses of the kernel VA range, and everything else in the higher ranges.

We need to adjust:
 *) the KASAN shadow region placement logic,
 *) the KASAN_SHADOW_OFFSET computation logic,
 *) the virt_to_phys, phys_to_virt checks,
 *) the page table dumper.

These are all small changes that need to take place atomically, so they are bundled into this commit.

As part of the re-arrangement, a guard region of 2MB (to preserve alignment for the fixed map) is added after the vmemmap. Otherwise the vmemmap could intersect with IS_ERR pointers.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Steve Capper

Currently there are assumptions about the alignment of VMEMMAP_START and PAGE_OFFSET that won't be valid after this series is applied. These assumptions take the form of bitwise operators being used instead of addition and subtraction when calculating addresses.

This patch replaces these bitwise operators with addition/subtraction.

Signed-off-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
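The pattern being replaced, in miniature (illustrative, not quoted from the patch):

    /*
     * Before: masking only works if 'addr' sits in an aligned,
     * power-of-two-sized region starting at PAGE_OFFSET.
     */
    offset = addr & ~PAGE_OFFSET;

    /* After: subtraction is correct for any region base. */
    offset = addr - PAGE_OFFSET;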
-
- 02 August 2019, 2 commits
-
-
Submitted by Masami Hiramatsu

Make debug exceptions visible to RCU so that synchronize_rcu() correctly tracks the debug exception handler. This also introduces sanity checks for user-mode exceptions, in the same way as x86's ist_enter()/ist_exit().

A debug exception can interrupt the idle task. For example, the kernel warns if we put a kprobe on a function called from the idle task, as below. The warning message says that rcu_read_lock() caused the problem, but in fact it means that RCU has lost track of a context that is already effectively in NMI/IRQ.

    /sys/kernel/debug/tracing # echo p default_idle_call >> kprobe_events
    /sys/kernel/debug/tracing # echo 1 > events/kprobes/enable
    /sys/kernel/debug/tracing #
    [  135.122237]
    [  135.125035] =============================
    [  135.125310] WARNING: suspicious RCU usage
    [  135.125581] 5.2.0-08445-g9187c508bdc7 #20 Not tainted
    [  135.125904] -----------------------------
    [  135.126205] include/linux/rcupdate.h:594 rcu_read_lock() used illegally while idle!
    [  135.126839]
    [  135.126839] other info that might help us debug this:
    [  135.126839]
    [  135.127410]
    [  135.127410] RCU used illegally from idle CPU!
    [  135.127410] rcu_scheduler_active = 2, debug_locks = 1
    [  135.128114] RCU used illegally from extended quiescent state!
    [  135.128555] 1 lock held by swapper/0/0:
    [  135.128944]  #0: (____ptrval____) (rcu_read_lock){....}, at: call_break_hook+0x0/0x178
    [  135.130499]
    [  135.130499] stack backtrace:
    [  135.131192] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.2.0-08445-g9187c508bdc7 #20
    [  135.131841] Hardware name: linux,dummy-virt (DT)
    [  135.132224] Call trace:
    [  135.132491]  dump_backtrace+0x0/0x140
    [  135.132806]  show_stack+0x24/0x30
    [  135.133133]  dump_stack+0xc4/0x10c
    [  135.133726]  lockdep_rcu_suspicious+0xf8/0x108
    [  135.134171]  call_break_hook+0x170/0x178
    [  135.134486]  brk_handler+0x28/0x68
    [  135.134792]  do_debug_exception+0x90/0x150
    [  135.135051]  el1_dbg+0x18/0x8c
    [  135.135260]  default_idle_call+0x0/0x44
    [  135.135516]  cpu_startup_entry+0x2c/0x30
    [  135.135815]  rest_init+0x1b0/0x280
    [  135.136044]  arch_call_rest_init+0x14/0x1c
    [  135.136305]  start_kernel+0x4d4/0x500
    [  135.136597]

So making the debug exception visible to RCU fixes this warning.

Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Acked-by: Paul E. McKenney <paulmck@linux.ibm.com>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
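A simplified sketch of the entry/exit pairing this describes; the helper names mirror the fix, but the exact bodies are assumptions:

    static void debug_exception_enter(struct pt_regs *regs)
    {
        /*
         * A debug exception can fire from the idle loop, where RCU is
         * not watching: notify RCU as if entering an NMI.
         */
        if (!user_mode(regs))
            rcu_nmi_enter();
        preempt_disable();
    }

    static void debug_exception_exit(struct pt_regs *regs)
    {
        preempt_enable_no_resched();
        if (!user_mode(regs))
            rcu_nmi_exit();
    }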
-
Submitted by Masami Hiramatsu

kprobes manipulates the interrupted PSTATE for single-stepping and doesn't restore it. Thus, if we put a kprobe where pstate.D (debug) is masked, the mask will be cleared after the kprobe hits.

Moreover, in the most complicated case, this can lead to a kernel crash with the below message when a nested kprobe hits:

    [  152.118921] Unexpected kernel single-step exception at EL1

When the 1st kprobe hits, do_debug_exception() is called. At this point, the debug exception (= pstate.D) must be masked (=1). But if another kprobe hits before the single-step of the first kprobe (e.g. inside a user pre_handler), it unmasks the debug exception (pstate.D = 0) and returns.

Then, when the 1st kprobe sets up single-stepping, it saves the current DAIF, masks DAIF, enables single-step, and restores DAIF. However, since the "D" flag in DAIF has been cleared by the 2nd kprobe, the single-step exception happens soon after restoring DAIF.

This was introduced by commit 7419333f ("arm64: kprobe: Always clear pstate.D in breakpoint exception handler").

To solve this issue, this patch stores all the DAIF bits and restores them after single-stepping.

Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Fixes: 7419333f ("arm64: kprobe: Always clear pstate.D in breakpoint exception handler")
Reviewed-by: James Morse <james.morse@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
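In outline, the fix widens what is saved and restored around the single-step window. A sketch: DAIF_MASK is the real arm64 define, while the kprobe-control-block field name is assumed:

    /* on breakpoint: remember the interrupted PSTATE's D, A, I and F bits */
    kcb->saved_irqflag = regs->pstate & DAIF_MASK;
    regs->pstate |= DAIF_MASK;            /* fully masked while stepping */

    /* in the single-step handler: put all four bits back */
    regs->pstate &= ~DAIF_MASK;
    regs->pstate |= kcb->saved_irqflag;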
-
- 01 August 2019, 8 commits
-
-
Submitted by Qian Cai

When CONFIG_KASAN_SW_TAGS=n, set_tag() is compiled away. GCC throws a warning:

    mm/kasan/common.c: In function '__kasan_kmalloc':
    mm/kasan/common.c:464:5: warning: variable 'tag' set but not used
    [-Wunused-but-set-variable]
       u8 tag = 0xff;
          ^~~

Fix it by making __tag_set() a static inline function, the same as arch_kasan_set_tag() in mm/kasan/kasan.h, for consistency, because there is a macro in arch/arm64/include/asm/kasan.h:

    #define arch_kasan_set_tag(addr, tag) __tag_set(addr, tag)

However, when CONFIG_DEBUG_VIRTUAL=n and CONFIG_SPARSEMEM_VMEMMAP=y, page_to_virt() will call __tag_set() with an incorrect parameter type, so fix that as well. Also, still let page_to_virt() return "void *" instead of "const void *", so we will not need to add a similar cast in lowmem_page_address().

Signed-off-by: Qian Cai <cai@lca.pw>
Signed-off-by: Will Deacon <will@kernel.org>
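The resulting helper looks roughly like this; the shape follows the arm64 headers of this period, with the exact body treated as an assumption:

    #define __tag_shifted(tag)  ((u64)(tag) << 56)

    static inline const void *__tag_set(const void *addr, u8 tag)
    {
        /* replace the top-byte tag of a tagged kernel pointer */
        u64 __addr = (u64)addr & ~__tag_shifted(0xff);
        return (const void *)(__addr | __tag_shifted(tag));
    }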
-
Submitted by Qian Cai

GCC throws a warning:

    arch/arm64/mm/mmu.c: In function 'pud_free_pmd_page':
    arch/arm64/mm/mmu.c:1033:8: warning: variable 'pud' set but not used
    [-Wunused-but-set-variable]
      pud_t pud;
            ^~~

because pud_table() is a macro whose argument is compiled away. Fix it by making it a static inline function, and likewise for pud_sect().

Signed-off-by: Qian Cai <cai@lca.pw>
Signed-off-by: Will Deacon <will@kernel.org>
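The macro-to-inline conversion for the folded-levels case, sketched (the constant-true variant applies when fewer page-table levels are configured; details assumed):

    /* was: #define pud_table(pud) (1) — leaves the caller's 'pud' unused */
    static inline bool pud_table(pud_t pud)
    {
        return true;  /* with folded levels, every pud is a table */
    }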
-
Submitted by Masami Hiramatsu

Remove rcu_read_lock()/rcu_read_unlock() from the debug exception handlers, since we are sure those are not preemptible and interrupts are off.

Acked-by: Paul E. McKenney <paulmck@linux.ibm.com>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Masami Hiramatsu

Prohibit probing on return_address() and the subroutines called from it, since it is invoked from trace_hardirqs_off(), which is also kprobe-blacklisted.

Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
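Blacklisting a symbol from kprobes is the standard one-liner (illustrative):

    #include <linux/kprobes.h>

    NOKPROBE_SYMBOL(return_address);  /* likewise for its callees */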
-
Submitted by Julien Thierry

On a system with two security states, if SCR_EL3.FIQ is cleared, non-secure IRQ priorities get shifted to fit the secure view, but priority masks aren't. On such a system, it turns out that GIC_PRIO_IRQON masks the priority of normal interrupts, which obviously ends up in a hang.

Increase the GIC_PRIO_IRQON value (i.e. lower its priority) to make sure interrupts are not blocked by it.

Cc: Oleg Nesterov <oleg@redhat.com>
Fixes: bd82d4bd ("arm64: Fix incorrect irqflag restore for priority masking")
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Julien Thierry <julien.thierry.kdev@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
[will: fixed Fixes: tag]
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Qian Cai

GCC throws out this warning on arm64:

    drivers/firmware/efi/libstub/arm-stub.c: In function 'efi_entry':
    drivers/firmware/efi/libstub/arm-stub.c:132:22: warning: variable 'si'
    set but not used [-Wunused-but-set-variable]

Fix it by making free_screen_info() a static inline function.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Qian Cai <cai@lca.pw>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Will Deacon

If CTR_EL0.{CWG,ERG} are 0b0000, then they must be interpreted to have their architecturally maximum values, which defeats the use of FTR_HIGHER_SAFE when sanitising CPU ID registers on heterogeneous machines.

Introduce FTR_HIGHER_OR_ZERO_SAFE so that these fields effectively saturate at zero.

Fixes: 3c739b57 ("arm64: Keep track of CPU feature registers")
Cc: <stable@vger.kernel.org> # 4.4.x-
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
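In the feature-sanitisation switch, the new policy plausibly slots in ahead of the existing one like so (a sketch inferred from the description, not quoted from the patch):

    case FTR_HIGHER_OR_ZERO_SAFE:
        /* 0b0000 means "architectural maximum": saturate at zero */
        if (!cur || !new) {
            ret = 0;
            break;
        }
        /* fall through */
    case FTR_HIGHER_SAFE:
        ret = max(cur, new);
        break;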
-
Submitted by Vincenzo Frascino

Using an old .config in combination with "make oldconfig" can cause an incorrect detection of the compat compiler:

    $ grep CROSS_COMPILE_COMPAT .config
    CONFIG_CROSS_COMPILE_COMPAT_VDSO=""

    $ make oldconfig && make
    arch/arm64/Makefile:58: gcc not found, check CROSS_COMPILE_COMPAT. Stop.

According to section 7.2 of the GNU Make manual, "Syntax of Conditionals": "When the value results from complex expansions of variables and functions, expansions you would consider empty may actually contain whitespace characters and thus are not seen as empty. However, you can use the strip function to avoid interpreting whitespace as a non-empty value."

Fix the issue by adding strip to the CROSS_COMPILE_COMPAT string evaluation.

Reported-by: Matteo Croce <mcroce@redhat.com>
Tested-by: Matteo Croce <mcroce@redhat.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 31 July 2019, 1 commit
-
-
Submitted by Thomas Gleixner

The generic VDSO implementation uses the Y2038-safe clock_gettime64() and clock_getres_time64() syscalls as fallback for the 32-bit VDSO. This breaks seccomp setups because these syscalls might not (yet) be allowed.

Implement the 32-bit variants, which use the legacy syscalls, and select the variant in the core library. The 64-bit time variants are not removed because they are required for the time64-based vdso accessors.

Fixes: 00b26474 ("lib/vdso: Provide generic VDSO implementation")
Reported-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reported-by: Paul Bolle <pebolle@tiscali.nl>
Suggested-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Link: https://lkml.kernel.org/r/20190728131648.971361611@linutronix.de
-
- 29 July 2019, 4 commits
-
-
Submitted by Anders Roxell

When fall-through warnings were enabled by default, the following warnings started to show up:

    ../arch/arm64/kernel/module.c: In function 'apply_relocate_add':
    ../arch/arm64/kernel/module.c:316:19: warning: this statement may fall
    through [-Wimplicit-fallthrough=]
       overflow_check = false;
       ~~~~~~~~~~~~~~~^~~~~~~
    ../arch/arm64/kernel/module.c:317:3: note: here
       case R_AARCH64_MOVW_UABS_G0:
       ^~~~
    ../arch/arm64/kernel/module.c:322:19: warning: this statement may fall
    through [-Wimplicit-fallthrough=]
       overflow_check = false;
       ~~~~~~~~~~~~~~~^~~~~~~
    ../arch/arm64/kernel/module.c:323:3: note: here
       case R_AARCH64_MOVW_UABS_G1:
       ^~~~

Rework so that the compiler doesn't warn about fall-through.

Fixes: d93512ef0f0e ("Makefile: Globally enable fall-through warning")
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Signed-off-by: Will Deacon <will@kernel.org>
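The conventional way to silence such a warning (whether this patch annotates or restructures isn't quoted here) is an explicit fall-through comment; the _NC case name is assumed from module.c of this era:

    case R_AARCH64_MOVW_UABS_G0_NC:
        overflow_check = false;
        /* Fall through */
    case R_AARCH64_MOVW_UABS_G0:
        /* ... handle the relocation, with or without overflow checking ... */
        break;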
-
Submitted by Anders Roxell

When fall-through warnings were enabled by default, the following warning started to show up:

    In file included from ../include/linux/kernel.h:15,
                     from ../include/linux/list.h:9,
                     from ../include/linux/kobject.h:19,
                     from ../include/linux/of.h:17,
                     from ../include/linux/irqdomain.h:35,
                     from ../include/linux/acpi.h:13,
                     from ../arch/arm64/kernel/smp.c:9:
    ../arch/arm64/kernel/smp.c: In function '__cpu_up':
    ../include/linux/printk.h:302:2: warning: this statement may fall
    through [-Wimplicit-fallthrough=]
      printk(KERN_CRIT pr_fmt(fmt), ##__VA_ARGS__)
      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ../arch/arm64/kernel/smp.c:156:4: note: in expansion of macro 'pr_crit'
        pr_crit("CPU%u: may not have shut down cleanly\n", cpu);
        ^~~~~~~
    ../arch/arm64/kernel/smp.c:157:3: note: here
       case CPU_STUCK_IN_KERNEL:
       ^~~~

Rework so that the compiler doesn't warn about fall-through.

Fixes: d93512ef0f0e ("Makefile: Globally enable fall-through warning")
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Will Deacon

Now that -Wimplicit-fallthrough is passed to GCC by default, the kernel build has suddenly got noisy. Annotate the two fall-through cases in our hw_breakpoint implementation, since they are both intentional.

Reported-by: Anders Roxell <anders.roxell@linaro.org>
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Will Deacon

Commit d968d2b8 ("ARM: 7497/1: hw_breakpoint: allow single-byte watchpoints on all addresses") changed the validation requirements for hardware watchpoints on arch/arm/. Update our compat layer to implement the same relaxation.

Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
-
- 25 July 2019, 1 commit
-
-
Submitted by Masahiro Yamada

UAPI headers licensed under GPL are supposed to carry the exception "WITH Linux-syscall-note" so that they can be included in non-GPL user-space application code.

The exception note is missing in some UAPI headers.

Some of them slipped in via the treewide conversion commit b2441318 ("License cleanup: add SPDX GPL-2.0 license identifier to files with no license"). Just run:

    $ git show --oneline b2441318 -- arch/x86/include/uapi/asm/

I believe they are not intentional, and should be fixed too.

This patch was generated by the following script:

    git grep -l --not -e Linux-syscall-note --and -e SPDX-License-Identifier \
        -- :arch/*/include/uapi/asm/*.h :include/uapi/ :^*/Kbuild |
    while read file
    do
        sed -i -e '/[[:space:]]OR[[:space:]]/s/\(GPL-[^[:space:]]*\)/(\1 WITH Linux-syscall-note)/g' \
               -e '/[[:space:]]or[[:space:]]/s/\(GPL-[^[:space:]]*\)/(\1 WITH Linux-syscall-note)/g' \
               -e '/[[:space:]]OR[[:space:]]/!{/[[:space:]]or[[:space:]]/!s/\(GPL-[^[:space:]]*\)/\1 WITH Linux-syscall-note/g}' $file
    done

After this patch is applied, there are 5 UAPI headers that do not contain "WITH Linux-syscall-note". They are kept untouched, since this exception applies only to GPL variants:

    $ git grep --not -e Linux-syscall-note --and -e SPDX-License-Identifier \
        -- :arch/*/include/uapi/asm/*.h :include/uapi/ :^*/Kbuild
    include/uapi/drm/panfrost_drm.h:/* SPDX-License-Identifier: MIT */
    include/uapi/linux/batman_adv.h:/* SPDX-License-Identifier: MIT */
    include/uapi/linux/qemu_fw_cfg.h:/* SPDX-License-Identifier: BSD-3-Clause */
    include/uapi/linux/vbox_err.h:/* SPDX-License-Identifier: MIT */
    include/uapi/linux/virtio_iommu.h:/* SPDX-License-Identifier: BSD-3-Clause */

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
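Concretely, for a GPL-2.0 UAPI header the identifier line changes like this (an illustrative example):

    /* before */
    /* SPDX-License-Identifier: GPL-2.0 */

    /* after */
    /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */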
-
- 23 July 2019, 2 commits
-
-
Submitted by Lucas Stach

The i.MX8M SAI block is not compatible with the i.MX6SX one, as the register layout has changed due to two version registers being added at the beginning of the address map. Remove the bogus compatible.

Fixes: 8c61538d ("arm64: dts: imx8mq: Add SAI2 node")
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Daniel Baluta <daniel.baluta@nxp.com>
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
-
Submitted by Anson Huang

According to the i.MX8MM reference manual, Rev. 1, 03/2019: the SAI3_RXC pin's mux option #1 should be GPT1_CLK, NOT GPT1_CAPTURE2; the SAI3_TXFS pin's mux option #1 should be GPT1_CAPTURE2, NOT GPT1_CLK.

Fixes: c1c9d413 ("dt-bindings: imx: Add pinctrl binding doc for imx8mm")
Signed-off-by: Anson Huang <Anson.Huang@nxp.com>
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
-
- 22 July 2019, 11 commits
-
-
Submitted by James Morse

Comparing the Arm ARM's pseudocode for AArch64.PCAlignmentFault() with AArch64.SPAlignmentFault() shows that SP faults don't copy the faulting SP to FAR_EL1, but this is where we read from, and the address we provide to user space with the BUS_ADRALN signal.

For user space this value will be UNKNOWN due to the previous ERET to user space. If the last value is preserved, on systems with KASLR or KPTI this will be the user-space link register left in FAR_EL1 by tramp_exit(). Fix this to retrieve the original sp_el0 value, and pass this to do_sp_pc_fault().

SP alignment faults from EL1 will cause us to take the fault again when trying to store the pt_regs. This eventually takes us to the overflow stack. Remove the ESR_ELx_EC_SP_ALIGN check, as we will never make it this far.

Fixes: 60ffc30d ("arm64: Exception handling")
Signed-off-by: James Morse <james.morse@arm.com>
[will: change label name and fleshed out comment]
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Marc Zyngier

On a CPU that doesn't support SSBS, PSTATE[12] is RES0. In a system where only some of the CPUs implement SSBS, we end up losing track of the SSBS bit across task migration.

To address this issue, let's force the SSBS bit on context switch.

Fixes: 8f04e8e6 ("arm64: ssbd: Add support for PSTATE.SSBS rather than trapping to EL3")
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[will: inverted logic and added comments]
Signed-off-by: Will Deacon <will@kernel.org>
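A minimal sketch of the idea, called from the context-switch path; the helper and state names are drawn from the arm64 code of this era but should be treated as assumptions:

    static void ssbs_thread_switch(struct task_struct *next)
    {
        struct pt_regs *regs = task_pt_regs(next);

        /* if the mitigation is wanted for this task, leave SSBS clear */
        if (arm64_get_ssbd_state() == ARM64_SSBD_FORCE_ENABLE ||
            test_tsk_thread_flag(next, TIF_SSBD))
            return;

        /*
         * The bit may have been lost while running on a CPU without
         * SSBS (where PSTATE[12] is RES0): force it back on.
         */
        if (user_mode(regs))
            regs->pstate |= PSR_SSBS_BIT;
    }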
-
Submitted by Anshuman Khandual

This helper is required by the generic huge_pte_alloc(), which is available when the arch subscribes to ARCH_WANT_GENERAL_HUGETLB. arm64 implements its own huge_pte_alloc() and does not depend on the generic definition. Drop this helper, which is redundant on arm64.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Steve Capper <Steve.Capper@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Dave Martin

There are some hand-written instances of "32" to express the number of SVE Z-registers. Since this code was written, a #define has been added for this, so convert trivial instances of this magic number as appropriate.

No functional change.

Reviewed-by: Julien Grall <julien.grall@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Dave Martin

Currently we convert from FPSIMD to SVE register state in memory in two places. To ease future maintenance, let's consolidate this in one place.

Reviewed-by: Julien Grall <julien.grall@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Mark Rutland

The arm64 stacktrace code is careful to only dereference frame records in valid stack ranges, ensuring that a corrupted frame record won't result in a faulting access.

However, it's still possible for corrupt frame records to result in infinite loops in the stacktrace code, which is also undesirable.

This patch ensures that we complete a stacktrace in finite time, by keeping track of which stacks we have already completed unwinding, and verifying that if the next frame record is on the same stack, it is at a higher address.

As this has turned out to be particularly subtle, comments are added to explain the procedure.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Acked-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Tengfei Fan <tengfeif@codeaurora.org>
Signed-off-by: Will Deacon <will@kernel.org>
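A sketch of the checks this adds to the unwinder; the field names follow the commit's description and should be treated as assumptions:

    /* state carried in struct stackframe: */
    DECLARE_BITMAP(stacks_done, __NR_STACK_TYPES);  /* stacks fully unwound */
    unsigned long prev_fp;
    enum stack_type prev_type;

    /* in unwind_frame(), for the next frame pointer 'fp': */
    if (test_bit(info.type, frame->stacks_done))
        return -EINVAL;              /* never revisit a finished stack */

    if (info.type == frame->prev_type) {
        if (fp <= frame->prev_fp)
            return -EINVAL;          /* must strictly ascend within a stack */
    } else {
        /* stack transitions are one-way: mark the old stack as done */
        set_bit(frame->prev_type, frame->stacks_done);
    }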
-
Submitted by Dave Martin

Some common code is required by each stacktrace user to initialise struct stackframe before the first call to unwind_frame().

In preparation for adding to this common code, this patch factors it out into a separate function, start_backtrace(), and modifies the stacktrace callers appropriately.

No functional change.

Signed-off-by: Dave Martin <dave.martin@arm.com>
[Mark: drop tsk argument, update more callsites]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
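The factored-out helper is essentially this (a sketch; the loop-prevention patch above grows it to reset its tracking state too):

    static inline void start_backtrace(struct stackframe *frame,
                                       unsigned long fp, unsigned long pc)
    {
        frame->fp = fp;
        frame->pc = pc;
    #ifdef CONFIG_FUNCTION_GRAPH_TRACER
        frame->graph = 0;
    #endif
    }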
-
Submitted by Dave Martin

on_accessible_stack() and on_task_stack() shouldn't (and don't) modify their task argument, so it can be const.

This patch adds the appropriate modifiers. Whitespace violations in the parameter lists are fixed at the same time.

No functional change.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Dave Martin <dave.martin@arm.com>
[Mark: fixup const location, whitespace]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Vincenzo Frascino

The recent changes to the vdso library for arm64 and the introduction of the compat vdso library have generated some misalignment in the Makefiles.

Clean up the Makefiles for the vdso and vdso32 libraries by:
 *) removing unused rules,
 *) unifying the displayed compilation messages,
 *) simplifying the generic library inclusion path for the arm64 vdso.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Naohiro Aota

Running "make" on an already-compiled kernel tree will rebuild the kernel even without any modifications:

    $ make ARCH=arm64 CROSS_COMPILE=/usr/bin/aarch64-unknown-linux-gnu-
    arch/arm64/Makefile:58: CROSS_COMPILE_COMPAT not defined or empty, the compat vDSO will not be built
      CALL    scripts/checksyscalls.sh
      CALL    scripts/atomic/check-atomics.sh
      VDSOCHK arch/arm64/kernel/vdso/vdso.so.dbg
      VDSOSYM include/generated/vdso-offsets.h
      CHK     include/generated/compile.h
      CC      arch/arm64/kernel/signal.o
      CC      arch/arm64/kernel/vdso.o
      CC      arch/arm64/kernel/signal32.o
      LD      arch/arm64/kernel/vdso/vdso.so.dbg
      OBJCOPY arch/arm64/kernel/vdso/vdso.so
      AS      arch/arm64/kernel/vdso/vdso.o
      AR      arch/arm64/kernel/vdso/built-in.a
      AR      arch/arm64/kernel/built-in.a
      GEN     .version
      CHK     include/generated/compile.h
      UPD     include/generated/compile.h
      CC      init/version.o
      AR      init/built-in.a
      LD      vmlinux.o

This is the same bug fixed in commit 92a47286 ("x86/boot: Fix if_changed build flip/flop bug"). We cannot use two "if_changed" in one target. Fix this build bug by merging the two commands into one function.

Fixes: a7f71a2c ("arm64: compat: Add vDSO")
Fixes: 28b1a824 ("arm64: vdso: Substitute gettimeofday() with C implementation")
Reviewed-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Co-developed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
[will: merged in compat fix from Vincenzo and made rule names consistent]
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Vincenzo Frascino

Prior to the introduction of unified vDSO support and the compat layer for the vDSO on arm64, AT_SYSINFO_EHDR was not defined for compat tasks. In the current implementation, AT_SYSINFO_EHDR is defined even if the compat vdso layer is not built, which has been shown to break Android applications using bionic:

    01-01 01:22:14.097   755   755 F libc    : Fatal signal 11 (SIGSEGV),
    code 1 (SEGV_MAPERR), fault addr 0x3cf2c96c in tid 755 (cameraserver),
    pid 755 (cameraserver)
    01-01 01:22:14.112   759   759 F libc    : Fatal signal 11 (SIGSEGV),
    code 1 (SEGV_MAPERR), fault addr 0x3cf2c96c in tid 759
    (android.hardwar), pid 759 (android.hardwar)
    01-01 01:22:14.120   756   756 F libc    : Fatal signal 11 (SIGSEGV),
    code 1 (SEGV_MAPERR), fault addr 0x3cf2c96c in tid 756 (drmserver),
    pid 756 (drmserver)

Restore the old behaviour by making sure that AT_SYSINFO_EHDR for compat tasks is defined only when CONFIG_COMPAT_VDSO is enabled.

Reported-by: John Stultz <john.stultz@linaro.org>
Tested-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
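The guard amounts to only defining the compat aux entry when a compat vDSO actually exists; a sketch consistent with the description, with the macro body assumed:

    #ifdef CONFIG_COMPAT_VDSO
    #define COMPAT_ARCH_DLINFO                                       \
    do {                                                             \
        /* expose the compat vDSO base to 32-bit tasks */            \
        NEW_AUX_ENT(AT_SYSINFO_EHDR,                                 \
                    (elf_addr_t)current->mm->context.vdso);          \
    } while (0)
    #else
    #define COMPAT_ARCH_DLINFO
    #endif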
-
- 19 July 2019, 1 commit
-
-
Submitted by David Hildenbrand

We want to improve error handling while adding memory by allowing the use of arch_remove_memory() and __remove_pages() even if CONFIG_MEMORY_HOTREMOVE is not set, e.g. to implement something like:

    arch_add_memory()
    rc = do_something();
    if (rc) {
        arch_remove_memory();
    }

We won't get rid of CONFIG_MEMORY_HOTREMOVE for now, as it would require quite some dependencies for memory offlining.

Link: http://lkml.kernel.org/r/20190527111152.16324-7-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Rich Felker <dalias@libc.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Oscar Salvador <osalvador@suse.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Mark Brown <broonie@kernel.org>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: "mike.travis@hpe.com" <mike.travis@hpe.com>
Cc: Andrew Banman <andrew.banman@hpe.com>
Cc: Arun KS <arunks@codeaurora.org>
Cc: Qian Cai <cai@lca.pw>
Cc: Mathieu Malaterre <malat@debian.org>
Cc: Baoquan He <bhe@redhat.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chintan Pandya <cpandya@codeaurora.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Jun Yao <yaojun8558363@gmail.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Wei Yang <richard.weiyang@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yu Zhao <yuzhao@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-