- 18 March 2020 (2 commits)

Committed by Amit Daniel Kachhap
Functions like vmap() record how much memory has been allocated by their callers, and callers are identified using __builtin_return_address(). Once the kernel is using pointer authentication, the return address will be signed. This means it will not match any kernel symbol and will vary between threads even for the same caller.

The output of /proc/vmallocinfo in this case may look like:

    0x(____ptrval____)-0x(____ptrval____) 20480 0x86e28000100e7c60 pages=4 vmalloc N0=4
    0x(____ptrval____)-0x(____ptrval____) 20480 0x86e28000100e7c60 pages=4 vmalloc N0=4
    0x(____ptrval____)-0x(____ptrval____) 20480 0xc5c78000100e7c60 pages=4 vmalloc N0=4

The above three 64-bit values should be the same symbol name, not different LR values.

Use the pre-processor to add logic to clear the PAC for __builtin_return_address() callers. This patch adds a new file, asm/compiler.h, which is transitively included via include/compiler_types.h on the compiler command line, so it is guaranteed to be loaded and users of this macro will not pick up a wrong version. Helper macros ptrauth_kernel_pac_mask/ptrauth_clear_pac are created for this purpose and added in this file. The existing macro ptrauth_user_pac_mask is moved from asm/pointer_auth.h.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
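As a rough illustration of the helpers this commit describes, the asm/compiler.h macros look something like the sketch below. The exact mask ranges and the use of the arm64 vabits_actual variable are assumptions for illustration, not text taken from the patch.

    #include <linux/bits.h>

    /* Bits that may hold a PAC; bit 55 selects kernel vs user addresses. */
    #define ptrauth_user_pac_mask()    GENMASK_ULL(54, vabits_actual)
    #define ptrauth_kernel_pac_mask()  (GENMASK_ULL(63, 56) | ptrauth_user_pac_mask())

    /* Strip the PAC by forcing the pointer-tag bits back to all-ones or all-zeroes. */
    #define ptrauth_clear_pac(ptr)                                         \
            (((ptr) & BIT_ULL(55)) ? ((ptr) | ptrauth_kernel_pac_mask())   \
                                   : ((ptr) & ~ptrauth_user_pac_mask()))

    /* Every caller of __builtin_return_address() now gets a clean address. */
    #define __builtin_return_address(val) \
            (void *)(ptrauth_clear_pac((unsigned long)__builtin_return_address(val)))

Because compiler_types.h is forced onto the compiler command line, the redefinition of __builtin_return_address() is seen consistently by every translation unit.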
Committed by Kristina Martsenko
When the kernel is compiled with pointer auth instructions, the boot CPU needs to start using address auth very early, so change the cpucap to account for this. Pointer auth must be enabled before we call C functions, because it is not possible to enter a function with pointer auth disabled and exit it with pointer auth enabled.

Note, mismatches between architected and IMPDEF algorithms will still be caught by the cpufeature framework (the separate *_ARCH and *_IMP_DEF cpucaps).

Note the change in behavior: if the boot CPU has address auth and a late CPU does not, then the late CPU is parked by the cpufeature framework. This is possible because the kernel will only have NOP-space instructions for PAC, so such a mismatched late CPU would silently ignore those instructions in C functions. Also, if the boot CPU does not have address auth and a late CPU does, then the late CPU will still boot, but with the ptrauth feature disabled.

Leave generic authentication as a "system scope" cpucap for now, since initially the kernel will only use address authentication.

Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: Re-worked ptrauth setup logic, comments]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- 4 February 2020 (2 commits)

Committed by Peter Zijlstra
Towards a more consistent naming scheme.

[akpm@linux-foundation.org: fix sparc64 Kconfig]
Link: http://lkml.kernel.org/r/20200116064531.483522-7-aneesh.kumar@linux.ibm.com
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Committed by Steven Price
Now that walk_page_range() can walk kernel page tables, we can switch the arm64 ptdump code over to using it, simplifying the code.

Link: http://lkml.kernel.org/r/20191218162402.45610-22-steven.price@arm.com
Signed-off-by: Steven Price <steven.price@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Alexandre Ghiti <alex@ghiti.fr>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Hogan <jhogan@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: "Liang, Kan" <kan.liang@linux.intel.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will@kernel.org>
Cc: Zong Li <zong.li@sifive.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
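For reference, a caller of the page-walk interface this commit builds on looks roughly like the sketch below. The callback, the counting logic and the helper name are invented for illustration; this is not the ptdump code itself.

    #include <linux/pagewalk.h>
    #include <linux/sched.h>

    /* Hypothetical callback: count present PTEs in a range of current->mm. */
    static int count_pte_entry(pte_t *pte, unsigned long addr,
                               unsigned long next, struct mm_walk *walk)
    {
            unsigned long *count = walk->private;

            if (pte_present(*pte))
                    (*count)++;
            return 0;
    }

    static const struct mm_walk_ops count_ops = {
            .pte_entry = count_pte_entry,
    };

    static unsigned long count_user_ptes(unsigned long start, unsigned long end)
    {
            unsigned long count = 0;

            /* The walk requires the mm's mmap semaphore/lock to be held. */
            down_read(&current->mm->mmap_sem);
            walk_page_range(current->mm, start, end, &count_ops, &count);
            up_read(&current->mm->mmap_sem);
            return count;
    }

The ptdump conversion applies the same mm_walk_ops interface to the kernel's own page tables instead of a user mm.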
- 22 January 2020 (2 commits)

Committed by Will Deacon
Remove the additional space.

Signed-off-by: Will Deacon <will@kernel.org>
Committed by Richard Henderson
Expose the ID_AA64ISAR0.RNDR field to userspace, as the RNG system registers are always available at EL0.

Implement arch_get_random_seed_long using RNDR. Given that the TRNG is likely to be a shared resource between cores and VMs, do not explicitly force re-seeding with RNDRRS. In order to avoid code complexity and potential issues with heterogeneous systems, only provide values after cpufeature has finalized the system capabilities.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
[Modified to only function after cpufeature has finalized the system capabilities and move all the code into the header -- broonie]
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
[will: Advertise HWCAP via /proc/cpuinfo]
Signed-off-by: Will Deacon <will@kernel.org>
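A hedged sketch of what an RNDR read can look like (the helper name is ours; the system-register encoding s3_3_c2_c4_0 corresponds to RNDR, and the architecture signals a failed read by setting the Z flag):

    #include <linux/types.h>

    static inline bool example_read_rndr(unsigned long *v)
    {
            bool ok;

            /* A failed read returns 0 and sets PSTATE.NZCV to 0b0100 (Z set). */
            asm volatile("mrs %0, s3_3_c2_c4_0\n"
                         "cset %w1, ne\n"
                         : "=r" (*v), "=r" (ok) : : "cc");
            return ok;
    }

arch_get_random_seed_long() can then simply return false until the cpufeature code has established that all CPUs implement the RNG registers, and call a helper like this afterwards.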
- 21 January 2020 (1 commit)

Committed by Vladimir Murzin
arm64 provides an always-working implementation of futex_atomic_cmpxchg_inatomic(), so there is no need to check it at runtime.

Reported-by: Piyush swami <Piyush.swami@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
- 16 January 2020 (4 commits)

Committed by Steven Price
Cortex-A55 erratum 1530923 allows TLB entries to be allocated as a result of a speculative AT instruction. This may happen in the middle of a guest world switch while the relevant VMSA configuration is in an inconsistent state, leading to erroneous content being allocated into TLBs.

The same workaround as is used for Cortex-A76 erratum 1165522 (WORKAROUND_SPECULATIVE_AT_VHE) can be used here. Note that this mandates the use of VHE on affected parts.

Acked-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Steven Price <steven.price@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Committed by Steven Price
To match SPECULATIVE_AT_VHE, let's also have a generic name for the NVHE variant.

Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Steven Price <steven.price@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Committed by Steven Price
Cortex-A55 is affected by a similar erratum, so rename the existing workaround for erratum 1165522 so it can be used for both errata.

Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Steven Price <steven.price@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Committed by Vladimir Murzin
Use the new 'as-instr' Kconfig macro to define CONFIG_BROKEN_GAS_INST directly, making it available everywhere.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
[will: Drop redundant 'y if' logic]
Signed-off-by: Will Deacon <will@kernel.org>
- 15 January 2020 (2 commits)

Committed by Mark Brown
Kernel Page Table Isolation (KPTI) is used to mitigate some speculation-based security issues by ensuring that the kernel is not mapped when userspace is running, but this approach is expensive and is incompatible with SPE. E0PD, introduced in the ARMv8.5 extensions, provides an alternative: it ensures that accesses from userspace to the kernel's half of the memory map always fault in constant time, preventing timing attacks without requiring constant unmapping and remapping or preventing legitimate accesses.

Currently this feature will only be enabled if all CPUs in the system support E0PD. If some CPUs do not support the feature at boot time then the feature will not be enabled, and in the unlikely event that a late CPU is the first CPU to lack the feature then we will reject that CPU.

This initial patch does not yet integrate with KPTI; this will be dealt with in follow-up patches. Ideally we could ensure that by default we don't use KPTI on CPUs where E0PD is present.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
[will: Fixed typo in Kconfig text]
Signed-off-by: Will Deacon <will@kernel.org>
Committed by Catalin Marinas
As the Kconfig syntax gained support for $(as-instr) tests, move the LSE gas support detection from the Makefile to the main arm64 Kconfig and remove the additional CONFIG_AS_LSE definition and check.

Cc: Will Deacon <will@kernel.org>
Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
- 9 January 2020 (1 commit)

Committed by Joe Perches
Remove the CONFIG_ prefix from the select statement for ARM_GIC_V3.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Will Deacon <will@kernel.org>
- 7 January 2020 (1 commit)

Committed by Amanieu d'Antras
This is required for clone3, which passes the TLS value through a struct rather than a register.

Signed-off-by: Amanieu d'Antras <amanieu@gmail.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: <stable@vger.kernel.org> # 5.3.x
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200102172413.654385-3-amanieu@gmail.com
Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
- 13 December 2019 (1 commit)

Committed by Shile Zhang
Use a more generic name for additional table sorting use cases, such as the upcoming ORC table sorting feature. This tool is not tied to exception table sorting anymore.

No functional changes intended.

[ mingo: Rewrote the changelog. ]

Signed-off-by: Shile Zhang <shile.zhang@linux.alibaba.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Michal Marek <michal.lkml@markovi.net>
Cc: linux-kbuild@vger.kernel.org
Link: https://lkml.kernel.org/r/20191204004633.88660-6-shile.zhang@linux.alibaba.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
- 12 December 2019 (1 commit)

Committed by Daniel Borkmann
After the Spectre 2 fix via 290af866 ("bpf: introduce BPF_JIT_ALWAYS_ON config"), most major distros use the BPF_JIT_ALWAYS_ON configuration these days, which compiles out the BPF interpreter entirely and always enables the JIT. Also, given the recent fix in e1608f3f ("bpf: Avoid setting bpf insns pages read-only when prog is jited"), we additionally avoid fragmenting the direct map for the BPF insns pages sitting in the general data heap, since they are not used during execution. The latter is only needed when run through the interpreter.

Since both the x86 and arm64 JITs have seen a lot of exposure over the years and are generally the most up to date and maintained, there is more downside in !BPF_JIT_ALWAYS_ON configurations to having the interpreter enabled by default rather than the JIT. Add an ARCH_WANT_DEFAULT_BPF_JIT config which archs can use to set bpf_jit_{enable,kallsyms} to 1. Back in the day the bpf_jit_kallsyms knob was set to 0 by default since major distros still had /proc/kallsyms addresses exposed to unprivileged user space, which is not the case anymore. Hence both knobs are set via BPF_JIT_DEFAULT_ON, which is set to 'y' in case of BPF_JIT_ALWAYS_ON or ARCH_WANT_DEFAULT_BPF_JIT.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/f78ad24795c2966efcc2ee19025fa3459f622185.1575903816.git.daniel@iogearbox.net
- 8 December 2019 (1 commit)

Committed by Thomas Gleixner
CONFIG_PREEMPTION is selected by CONFIG_PREEMPT and by CONFIG_PREEMPT_RT. Both PREEMPT and PREEMPT_RT require the same functionality, which today depends on CONFIG_PREEMPT.

Switch the Kconfig dependency, entry code and preemption handling over to use CONFIG_PREEMPTION. Add PREEMPT_RT output in show_stack().

[bigeasy: +traps.c, Kconfig]

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lore.kernel.org/r/20191015191821.11479-3-bigeasy@linutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
- 25 November 2019 (1 commit)

Committed by Will Deacon
The generic implementation of refcount_t should be good enough for everybody, so remove ARCH_HAS_REFCOUNT and REFCOUNT_FULL entirely, leaving the generic implementation enabled unconditionally.

Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Kees Cook <keescook@chromium.org>
Tested-by: Hanjun Guo <guohanjun@huawei.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20191121115902.2551-9-will@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
- 17 November 2019 (1 commit)

Committed by Ard Biesheuvel
In order to use 128-bit integer arithmetic in C code, the architecture needs to have declared support for it by setting ARCH_SUPPORTS_INT128, and it requires a version of the toolchain that supports this at build time. This is why all existing tests for ARCH_SUPPORTS_INT128 also test whether __SIZEOF_INT128__ is defined, since this is only the case for compilers that can support 128-bit integers.

Let's fold this additional test into the Kconfig declaration of ARCH_SUPPORTS_INT128 so that we can also use the symbol in Makefiles, e.g., to decide whether a certain object needs to be included in the first place.

Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
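Illustrative sketch of the effect: after the fold, guarding code on CONFIG_ARCH_SUPPORTS_INT128 alone is enough, because the symbol is now only set when the compiler defines __SIZEOF_INT128__. The helper below is invented for illustration.

    #include <linux/types.h>

    #ifdef CONFIG_ARCH_SUPPORTS_INT128
    static inline u64 mul_hi_u64(u64 a, u64 b)
    {
            /* Full 64x64 -> 128-bit multiply, returning the high 64 bits. */
            return (u64)(((unsigned __int128)a * b) >> 64);
    }
    #endif

A Makefile can now test the same symbol, for example with an obj-$(CONFIG_ARCH_SUPPORTS_INT128) rule, instead of repeating the compiler check.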
- 14 November 2019 (1 commit)

Committed by Anders Roxell
When building 'allmodconfig KCONFIG_ALLCONFIG=$(pwd)/arch/arm64/configs/defconfig', CONFIG_CPU_BIG_ENDIAN gets enabled, which tends not to be what most people want. Another concern that has come up is that ACPI isn't built for an allmodconfig kernel today since that also depends on !CPU_BIG_ENDIAN.

Rework so that we introduce a 'choice' and default the choice to CPU_LITTLE_ENDIAN. That means that when we build an allmodconfig kernel it will default to CPU_LITTLE_ENDIAN, which is what most people tend to want.

Reviewed-by: John Garry <john.garry@huawei.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- 12 November 2019 (1 commit)

Committed by Anders Roxell
When building 'allmodconfig KCONFIG_ALLCONFIG=$(pwd)/arch/arm64/configs/defconfig', CONFIG_CMDLINE_FORCE gets enabled, which forces the user to pass the full cmdline via CONFIG_CMDLINE="...".

Rework so that CONFIG_CMDLINE_FORCE gets set only if CONFIG_CMDLINE is set to something other than an empty string.

Suggested-by: John Garry <john.garry@huawei.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- 11 November 2019 (1 commit)

Committed by Christoph Hellwig
For dma-direct we know that the DMA address is an encoding of the physical address that we can trivially decode. Use that fact to provide implementations that do not need the arch_dma_coherent_to_pfn architecture hook.

Note that we can still only support mmap of non-coherent memory if the architecture provides a way to set an uncached bit in the page tables. This must be true for architectures that use the generic remap helpers, but other architectures can also manually select it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Max Filippov <jcmvbkbc@gmail.com>
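The "trivially decode" observation amounts to something like the sketch below (the helper name is ours, not the patch's):

    #include <linux/dma-direct.h>
    #include <linux/pfn.h>

    /* For dma-direct, a handle maps straight back to a physical page. */
    static unsigned long example_dma_handle_to_pfn(struct device *dev,
                                                   dma_addr_t dma_addr)
    {
            return PHYS_PFN(dma_to_phys(dev, dma_addr));
    }

With that, no per-architecture arch_dma_coherent_to_pfn hook is needed on the dma-direct path.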
- 6 November 2019 (1 commit)

Committed by Torsten Duwe
This patch implements FTRACE_WITH_REGS for arm64, which allows a traced function's arguments (and some other registers) to be captured into a struct pt_regs, allowing these to be inspected and/or modified. This is a building block for live-patching, where a function's arguments may be forwarded to another function. This is also necessary to enable ftrace and in-kernel pointer authentication at the same time, as it allows the LR value to be captured and adjusted prior to signing.

Using GCC's -fpatchable-function-entry=N option, we can have the compiler insert a configurable number of NOPs between the function entry point and the usual prologue. This also ensures functions are AAPCS compliant (e.g. disabling inter-procedural register allocation).

For example, with -fpatchable-function-entry=2, GCC 8.1.0 compiles the following:

| unsigned long bar(void);
|
| unsigned long foo(void)
| {
|         return bar() + 1;
| }

... to:

| <foo>:
|         nop
|         nop
|         stp x29, x30, [sp, #-16]!
|         mov x29, sp
|         bl 0 <bar>
|         add x0, x0, #0x1
|         ldp x29, x30, [sp], #16
|         ret

This patch builds the kernel with -fpatchable-function-entry=2, prefixing each function with two NOPs. To trace a function, we replace these NOPs with a sequence that saves the LR into a GPR, then calls an ftrace entry assembly function which saves this and other relevant registers:

| mov x9, x30
| bl <ftrace-entry>

Since patchable functions are AAPCS compliant (and the kernel does not use x18 as a platform register), x9-x18 can be safely clobbered in the patched sequence and the ftrace entry code.

There are now two ftrace entry functions, ftrace_regs_entry (which saves all GPRs), and ftrace_entry (which saves the bare minimum). A PLT is allocated for each within modules.

Signed-off-by: Torsten Duwe <duwe@suse.de>
[Mark: rework asm, comments, PLTs, initialization, commit message]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Torsten Duwe <duwe@suse.de>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Tested-by: Torsten Duwe <duwe@suse.de>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Julien Thierry <jthierry@redhat.com>
Cc: Will Deacon <will@kernel.org>
- 26 October 2019 (2 commits)

Committed by Marc Zyngier
Now that everything is in place, let's get the ball rolling by allowing the corresponding config option to be selected. Also add the required information to silicon_errata.rst.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Committed by James Morse
Cores affected by Neoverse-N1 #1542419 could execute a stale instruction when a branch is updated to point to freshly generated instructions.

To work around this issue we need user-space to issue unnecessary icache maintenance that we can trap. Start by hiding CTR_EL0.DIC.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- 14 October 2019 (1 commit)

Committed by Nicolas Saenz Julienne
So far all arm64 devices have supported 32-bit DMA masks for their peripherals. This is not true anymore for the Raspberry Pi 4, as most of its peripherals can only address the first GB of memory out of a total of up to 4 GB. This goes against ZONE_DMA32's intent, as it's expected for ZONE_DMA32 to be addressable with a 32-bit mask. So it was decided to re-introduce ZONE_DMA in arm64.

ZONE_DMA will contain the lower 1 GB of memory, which is currently the memory area addressable by any peripheral on an arm64 device. ZONE_DMA32 will contain the rest of the 32-bit addressable memory.

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- 8 October 2019 (1 commit)

Committed by Marc Zyngier
Allow the user to select the workaround for TX2-219, and update the silicon-errata.rst file to reflect this.

Cc: <stable@vger.kernel.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
- 7 October 2019 (2 commits)

Committed by Will Deacon
CONFIG_COMPAT_VDSO is defined by passing '-DCONFIG_COMPAT_VDSO' to the compiler when the generic compat vDSO code is in use. It's much cleaner and simpler to expose this as a proper Kconfig option (like x86 does), so do that and remove the bodge.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Committed by Vincenzo Frascino
The .config file and the generated include/config/auto.conf can end up out of sync after a set of commands, since CONFIG_CROSS_COMPILE_COMPAT_VDSO is not updated correctly. The sequence can be reproduced as follows:

    $ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig
    [...]
    $ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- menuconfig
    [set CONFIG_CROSS_COMPILE_COMPAT_VDSO="arm-linux-gnueabihf-"]
    $ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu-

Which results in:

    arch/arm64/Makefile:62: CROSS_COMPILE_COMPAT not defined or empty, the compat vDSO will not be built

even though the compat vDSO has been built:

    $ file arch/arm64/kernel/vdso32/vdso.so
    arch/arm64/kernel/vdso32/vdso.so: ELF 32-bit LSB pie executable, ARM, EABI5 version 1 (SYSV), dynamically linked, BuildID[sha1]=c67f6c786f2d2d6f86c71f708595594aa25247f6, stripped

A similar case that involves changing the configuration parameter multiple times can be traced back to the same family of problems.

Remove the use of CONFIG_CROSS_COMPILE_COMPAT_VDSO altogether and instead rely on the cross-compiler prefix coming from the environment via CROSS_COMPILE_COMPAT, much like we do for the rest of the kernel.

Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
- 25 September 2019 (2 commits)

Committed by Alexandre Ghiti
This commit selects ARCH_HAS_ELF_RANDOMIZE when an arch uses the generic topdown mmap layout functions, so that this security feature is on by default.

Note that this commit also removes the possibility for arm64 to have ELF randomization and no MMU: without an MMU, the security added by randomization is worth nothing.

Link: http://lkml.kernel.org/r/20190730055113.23635-6-alex@ghiti.fr
Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Committed by Alexandre Ghiti
arm64 handles top-down mmap layout in a way that can be easily reused by other architectures, so make it available in mm. It then introduces a new config, ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT, that can be set by other architectures to benefit from those functions. Note that this new config depends on MMU being enabled; if selected without MMU support, a warning will be thrown.

Link: http://lkml.kernel.org/r/20190730055113.23635-5-alex@ghiti.fr
Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
Suggested-by: Christoph Hellwig <hch@infradead.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: James Hogan <jhogan@kernel.org>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
- 18 September 2019 (1 commit)

Committed by Jeremy Cline
The referenced file does not exist, but tagged-address-abi.rst does.

Signed-off-by: Jeremy Cline <jcline@redhat.com>
Signed-off-by: Will Deacon <will@kernel.org>
- 30 August 2019 (1 commit)

Committed by Will Deacon
Support for LSE atomic instructions (CONFIG_ARM64_LSE_ATOMICS) relies on a static key to select between the legacy LL/SC implementation which is available on all arm64 CPUs and the super-duper LSE implementation which is available on CPUs implementing v8.1 and later.

Unfortunately, when building a kernel with CONFIG_JUMP_LABEL disabled (e.g. because the toolchain doesn't support 'asm goto'), the static key inside the atomics code tries to use atomics itself. This results in a mess of circular includes and a build failure:

    In file included from ./arch/arm64/include/asm/lse.h:11,
                     from ./arch/arm64/include/asm/atomic.h:16,
                     from ./include/linux/atomic.h:7,
                     from ./include/asm-generic/bitops/atomic.h:5,
                     from ./arch/arm64/include/asm/bitops.h:26,
                     from ./include/linux/bitops.h:19,
                     from ./include/linux/kernel.h:12,
                     from ./include/asm-generic/bug.h:18,
                     from ./arch/arm64/include/asm/bug.h:26,
                     from ./include/linux/bug.h:5,
                     from ./include/linux/page-flags.h:10,
                     from kernel/bounds.c:10:
    ./include/linux/jump_label.h: In function ‘static_key_count’:
    ./include/linux/jump_label.h:254:9: error: implicit declaration of function ‘atomic_read’ [-Werror=implicit-function-declaration]
      return atomic_read(&key->enabled);
             ^~~~~~~~~~~

    [ ... more of the same ... ]

Since LSE atomic instructions are not critical to the operation of the kernel, make them depend on JUMP_LABEL at compile time.

Reviewed-by: Andrew Murray <andrew.murray@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
- 29 August 2019 (1 commit)

Committed by Christoph Hellwig
arch_dma_mmap_pgprot is used for two things:

1) to override the "normal" uncached page attributes for mapping memory coherent to devices that can't snoop the CPU caches
2) to provide the special DMA_ATTR_WRITE_COMBINE semantics on older arm systems and some mips platforms

Replace one with the pgprot_dmacoherent macro that is already provided by arm and much simpler to use, and lift the DMA_ATTR_WRITE_COMBINE handling to common code with an explicit arch opt-in.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> # m68k
Acked-by: Paul Burton <paul.burton@mips.com> # mips
- 22 August 2019 (1 commit)

Committed by Masahiro Yamada
Add CONFIG_ASM_MODVERSIONS. This allows us to remove one level of if-conditional nesting in scripts/Makefile.build.

scripts/Makefile.build is run every time Kbuild descends into a sub-directory, so I want to avoid $(wildcard ...) evaluation where possible, although computing $(wildcard ...) is so cheap that it may not make a measurable performance difference.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
- 20 August 2019 (1 commit)

Committed by Jiri Bohac
This is a preparatory patch for kexec_file_load() lockdown. A locked-down kernel needs to prevent unsigned kernel images from being loaded with kexec_file_load(). Currently, the only way to force signature verification is compiling with KEXEC_VERIFY_SIG. This prevents loading unsigned images even when the kernel is not locked down at runtime.

This patch splits KEXEC_VERIFY_SIG into KEXEC_SIG and KEXEC_SIG_FORCE. Analogous to MODULE_SIG and MODULE_SIG_FORCE for modules, KEXEC_SIG turns on signature verification but allows unsigned images to be loaded. KEXEC_SIG_FORCE disallows images without a valid signature.

Signed-off-by: Jiri Bohac <jbohac@suse.cz>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Matthew Garrett <mjg59@google.com>
cc: kexec@lists.infradead.org
Signed-off-by: James Morris <jmorris@namei.org>
- 9 August 2019 (2 commits)

Committed by Steve Capper
Most of the machinery is now in place to enable 52-bit kernel VAs that are detectable at boot time.

This patch adds a Kconfig option for 52-bit user and kernel addresses and plumbs in the requisite CONFIG_ macros, as well as setting TCR.T1SZ, physvirt_offset and vmemmap at early boot.

To simplify things, this patch also removes the 52-bit user/48-bit kernel Kconfig option.

Signed-off-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Committed by Steve Capper
KASAN_SHADOW_OFFSET is a constant that is supplied to gcc as a command-line argument and affects the codegen of the inline address sanitiser.

Essentially, for an example memory access:

    *ptr1 = val;

the compiler will insert logic similar to the below:

    shadowValue = *(ptr1 >> KASAN_SHADOW_SCALE_SHIFT + KASAN_SHADOW_OFFSET)
    if (somethingWrong(shadowValue))
        flagAnError();

This code sequence is inserted into many places, thus KASAN_SHADOW_OFFSET is essentially baked into many places in the kernel text. If we want to run a single kernel binary with multiple address spaces, then we need to do this with KASAN_SHADOW_OFFSET fixed.

Thankfully, due to the way KASAN_SHADOW_OFFSET is used to provide shadow addresses, we know that the end of the shadow region is constant w.r.t. VA space size:

    KASAN_SHADOW_END = ~0 >> KASAN_SHADOW_SCALE_SHIFT + KASAN_SHADOW_OFFSET

This means that if we increase the size of the VA space, the start of the KASAN region expands into lower addresses whilst the end of the KASAN region is fixed.

Currently the arm64 code computes KASAN_SHADOW_OFFSET at build time via build scripts with the VA size used as a parameter. (There are build-time checks in the C code too to ensure that expected values are being derived.) It is sufficient, and indeed is a simplification, to remove the build scripts (and build-time checks) entirely and instead provide KASAN_SHADOW_OFFSET values.

This patch removes the logic to compute KASAN_SHADOW_OFFSET in the arm64 Makefile, and instead we adopt the approach used by x86 to supply offset values in Kconfig. To help debug/develop future VA space changes, the Makefile logic has been preserved in a script file in the arm64 Documentation folder.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
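The fixed mapping described above is, in effect, the sketch below (mirroring the expression in the text; the kernel's own kasan_mem_to_shadow() helper has the same shape):

    #include <linux/kasan.h>

    static inline void *example_mem_to_shadow(const void *addr)
    {
            return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                    + KASAN_SHADOW_OFFSET;
    }

With KASAN_SHADOW_OFFSET supplied directly from Kconfig, this expression, and everything the compiler inlines from it, stays identical across the supported VA sizes; only the lowest shadow address moves.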
- 7 August 2019 (1 commit)

Committed by Leo Yan
Inspired by commit 7cd01b08 ("powerpc: Add support for function error injection"), this patch supports function error injection for Arm64.

This patch mainly supports two functions: regs_set_return_value(), which is used to overwrite the return value, and override_function_with_return(), which overrides the probed function's return and jumps back to its caller.

Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Leo Yan <leo.yan@linaro.org>
Signed-off-by: Will Deacon <will@kernel.org>
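A hedged sketch of what the two helpers amount to on arm64 (illustration only, not verbatim kernel code): under AAPCS64 the return value lives in x0, and x30 is the link register holding the caller's address.

    #include <asm/ptrace.h>

    static inline void example_set_return_value(struct pt_regs *regs,
                                                unsigned long rc)
    {
            regs->regs[0] = rc;        /* return value lives in x0 */
    }

    static inline void example_return_to_caller(struct pt_regs *regs)
    {
            regs->pc = regs->regs[30]; /* resume at the saved link register */
    }

An error-injection framework uses the first to force an error code such as -EINVAL and the second to skip the probed function's body entirely.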