- 02 Sep 2020, 7 commits
-
-
Submitted by James Morse

task #28924046

[ Upstream commit 05460849c3b5 ]

Cores affected by Neoverse-N1 #1542419 could execute a stale instruction when a branch is updated to point to freshly generated instructions.

To work around this issue we need user-space to issue unnecessary icache maintenance that we can trap. Start by hiding CTR_EL0.DIC.

Backport change: renumber ARM64_WORKAROUND_1542419 from 45 to 37.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Bin Yu <jkchen@linux.alibaba.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: zou cao <zoucao@linux.alibaba.com>
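For illustration, the effect of hiding CTR_EL0.DIC can be sketched as below (function name is hypothetical; based on the upstream commit rather than this tree's exact code):

    #include <linux/bits.h>
    #include <asm/cache.h>
    #include <asm/cpufeature.h>

    /* Sketch: the CTR_EL0 value exposed to EL0 has DIC cleared on
     * affected cores, so user-space keeps issuing the icache
     * maintenance that the kernel can trap. */
    static u64 ctr_el0_value_for_el0(void)
    {
            u64 ctr = read_sanitised_ftr_reg(SYS_CTR_EL0);

            if (cpus_have_const_cap(ARM64_WORKAROUND_1542419))
                    ctr &= ~BIT(CTR_DIC_SHIFT);

            return ctr;
    }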
-
Submitted by Suzuki K Poulose

task #28924046

[ Upstream commit 4afe8e79da92 ]

When there is a mismatch in the CTR_EL0 field, we trap access to CTR from EL0 on all CPUs to expose the safe value. However, we could skip trapping on a CPU which matches the safe value.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Bin Yu <jkchen@linux.alibaba.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: zou cao <zoucao@linux.alibaba.com>
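The per-CPU decision can be sketched like this (helper name hypothetical):

    #include <asm/cpufeature.h>
    #include <asm/cputype.h>

    /* Sketch: only trap EL0's CTR_EL0 reads on a CPU whose raw value
     * differs from the system-wide safe value. */
    static bool cpu_needs_ctr_trap(void)
    {
            return read_cpuid_cachetype() != read_sanitised_ftr_reg(SYS_CTR_EL0);
    }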
-
Submitted by James Morse

task #28924046

[ Upstream commit 3276cc248964 ]

Neoverse-N1 cores affected by #1349291 may report an Uncontained RAS Error as Unrecoverable. The kernel's architecture code already considers Unrecoverable errors fatal, as without kernel-first support no further error-handling is possible.

Now that KVM attributes SErrors to the host/guest more precisely, the host's architecture code will always handle host errors that become pending during world-switch. Errors misclassified by this erratum that affected the guest will be re-injected to the guest as an implementation-defined SError, which can be uncontained.

Until kernel-first support is implemented, no workaround is needed for this issue.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Bin Yu <jkchen@linux.alibaba.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: zou cao <zoucao@linux.alibaba.com>
-
Submitted by Marc Zyngier

task #28924046

[ Upstream commit a5325089bd05 ]

We already mitigate erratum 1188873 affecting Cortex-A76 and Neoverse-N1 r0p0 to r2p0. It turns out that revisions r0p0 to r3p1 of the same cores are affected by erratum 1418040, which has the same workaround as 1188873.

Let's expand the range of affected revisions to match 1418040, and repaint all occurrences of 1188873 to 1418040. Whilst we're there, do a bit of reformatting in silicon-errata.txt and drop a now-unnecessary dependency on ARM_ARCH_TIMER_OOL_WORKAROUND.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Bin Yu <jkchen@linux.alibaba.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: zou cao <zoucao@linux.alibaba.com>
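The widened MIDR ranges match the upstream commit; a sketch of the resulting erratum table:

    #include <asm/cputype.h>

    /* Sketch: erratum 1418040 covers r0p0..r3p1 of both cores, where
     * the old 1188873 entries stopped at r2p0. */
    static const struct midr_range erratum_1418040_list[] = {
            /* Cortex-A76 r0p0 to r3p1 */
            MIDR_RANGE(MIDR_CORTEX_A76, 0, 0, 3, 1),
            /* Neoverse-N1 r0p0 to r3p1 */
            MIDR_RANGE(MIDR_NEOVERSE_N1, 0, 0, 3, 1),
            {},
    };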
-
Submitted by Marc Zyngier

task #28924046

[ Upstream commit 0f80cad3124f986d0e46c14d46b8da06d87a2bf4 ]

We currently deal with ARM64_ERRATUM_1188873 by always trapping EL0 accesses for both instruction sets. Although nothing wrong comes out of that, people trying to squeeze the last drop of performance from buggy HW find this over the top. Oh well.

Let's change the mitigation by flipping the counter enable bit on return to userspace. Non-broken HW gets an extra branch on the fast path, which is hopefully not the end of the world. The arch timer workaround is also removed.

Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: zou cao <zoucao@linux.alibaba.com>
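Rendered in C for illustration (the actual change lives in the kernel-exit assembly; the helper name here is hypothetical):

    #include <asm/ptrace.h>
    #include <asm/sysreg.h>
    #include <clocksource/arm_arch_timer.h>

    /* Sketch: on affected cores, grant direct virtual-counter access
     * only to 64-bit tasks; AArch32 EL0 reads trap and are emulated. */
    static void erratum_1188873_update_cntkctl(struct pt_regs *regs)
    {
            u64 kctl = read_sysreg(cntkctl_el1);

            if (compat_user_mode(regs))
                    kctl &= ~ARCH_TIMER_USR_VCT_ACCESS_EN;
            else
                    kctl |= ARCH_TIMER_USR_VCT_ACCESS_EN;

            write_sysreg(kctl, cntkctl_el1);
    }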
-
Submitted by Marc Zyngier

task #28924046

[ Upstream commit 6989303a3b2d864fd8e17d3fa3365d3e9649a598 ]

Neoverse-N1 is also affected by ARM64_ERRATUM_1188873, so let's add it to the list of affected CPUs.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[will: Update silicon-errata.txt]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Bin Yu <jkchen@linux.alibaba.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: zou cao <zoucao@linux.alibaba.com>
-
Submitted by Marc Zyngier

task #28924046

[ Upstream commit 95b861a4a6d94f64d5242605569218160ebacdbe ]

When running on Cortex-A76, a timer access from an AArch32 EL0 task may end up with a corrupted value or register. The workaround for this is to trap these accesses at EL1/EL2 and execute them there. This only affects versions r0p0, r1p0 and r2p0 of the CPU.

Backport change: this patch renumbers ARM64_WORKAROUND_1188873 from 35 to 36, and the ARM_CPU_PART_CORTEX_A76 definition is dropped because a previously modified patch already provides it.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Bin Yu <jkchen@linux.alibaba.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: zou cao <zoucao@linux.alibaba.com>
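For illustration, the EL1 emulation of a trapped counter read looks roughly like the arm64 sys64 trap hooks (a sketch, not this tree's exact code):

    #include <asm/esr.h>
    #include <asm/insn.h>
    #include <asm/ptrace.h>
    #include <asm/traps.h>
    #include <clocksource/arm_arch_timer.h>

    /* Sketch: read the counter at EL1 on the task's behalf, then step
     * past the faulting instruction. */
    static void cntvct_trap_handler(unsigned int esr, struct pt_regs *regs)
    {
            int rt = ESR_ELx_SYS64_ISS_RT(esr);

            pt_regs_write_reg(regs, rt, arch_timer_read_counter());
            arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
    }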
-
- 29 Jun 2020, 2 commits
-
-
Submitted by James Morse

fix #28612342

commit 8fcc4ae6faf8b455eeef00bc9ae70744e3b0f462 upstream

APEI is unable to do all of its error handling work in nmi-context, so it defers non-fatal work onto the irq_work queue. arch_irq_work_raise() sends an IPI to the calling cpu, but this is not guaranteed to be taken before returning to user-space.

Unless the exception interrupted a context with irqs-masked, irq_work_run() can run immediately. Otherwise return -EINPROGRESS to indicate ghes_notify_sea() found some work to do, but it hasn't finished yet.

With this, apei_claim_sea() returning '0' means this external-abort was also notification of a firmware-first RAS error, and that APEI has processed the CPER records.

Signed-off-by: James Morse <james.morse@arm.com>
Tested-by: Tyler Baicar <baicar@os.amperecomputing.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Alex Shi <alex.shi@linux.alibaba.com>
Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
Reviewed-by: luanshi <zhangliguang@linux.alibaba.com>
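A sketch of the caller-side contract this establishes (helper name hypothetical):

    #include <linux/errno.h>
    #include <acpi/apei.h>
    #include <asm/ptrace.h>

    /* Sketch: how an external-abort handler interprets the result. */
    static bool sea_was_firmware_first(struct pt_regs *regs)
    {
            int err = apei_claim_sea(regs);

            /* 0: APEI claimed the abort and processed the CPER records.
             * -EINPROGRESS: claimed, but deferred work is still queued. */
            return err == 0 || err == -EINPROGRESS;
    }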
-
Submitted by James Morse

fix #28612342

commit d44f1b8dd7e66d80cc4205809e5ace866bd851da upstream

To split up APEI's in_nmi() path, the caller needs to always be in_nmi(). Add a helper to do the work and claim the notification.

When KVM or the arch code takes an exception that might be a RAS notification, it asks the APEI firmware-first code whether it wants to claim the exception. A future kernel-first mechanism may be queried afterwards, and claim the notification; otherwise we fall through to the existing default behaviour.

The NOTIFY_SEA code was merged before considering multiple, possibly interacting, NMI-like notifications and the need to consider kernel-first in the future. Make the 'claiming' behaviour explicit.

Restructuring the APEI code to allow multiple NMI-like notifications means any notification that might interrupt interrupts-masked code must always be wrapped in nmi_enter()/nmi_exit(). This will allow APEI to use in_nmi() to use the right fixmap entries. Mask SError over this window to prevent an asynchronous RAS error arriving and tripping nmi_enter()'s BUG_ON(in_nmi()).

Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Tyler Baicar <tbaicar@codeaurora.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Alex Shi <alex.shi@linux.alibaba.com>
Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
Reviewed-by: luanshi <zhangliguang@linux.alibaba.com>
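A sketch of the claim helper itself, modelled on the upstream commit (error paths condensed):

    #include <linux/errno.h>
    #include <linux/hardirq.h>
    #include <acpi/ghes.h>
    #include <asm/daifflags.h>
    #include <asm/ptrace.h>

    int apei_claim_sea(struct pt_regs *regs)
    {
            int err = -ENOENT;
            unsigned long flags;

            if (!IS_ENABLED(CONFIG_ACPI_APEI_GHES))
                    return err;

            flags = arch_local_save_flags();

            /* An SEA can interrupt an SError; mask SError and describe
             * this as an NMI so that APEI defers its handling. */
            local_daif_restore(DAIF_ERRCTX);
            nmi_enter();
            err = ghes_notify_sea();
            nmi_exit();
            local_daif_restore(flags);

            return err;
    }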
-
- 28 Apr 2020, 1 commit
-
-
Submitted by Rongwei Wang

to #26730415

commit 86d0dd34eafffbc76a81aba6ae2d71927d3835a8 upstream.

Add a CRC32 feature bit and wire it up to the CPU id register so we will be able to use alternatives patching for CRC32 operations.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
[ Rongwei: fixed conflicts ]
Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com>
Acked-by: Zou Cao <zoucao@linux.alibaba.com>
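Detection is driven from ID_AA64ISAR0_EL1; the cpufeature table entry follows the upstream commit:

    /* Sketch: arch/arm64/kernel/cpufeature.c capability entry */
    {
            .desc = "CRC32 instructions",
            .capability = ARM64_HAS_CRC32,
            .type = ARM64_CPUCAP_SYSTEM_FEATURE,
            .matches = has_cpuid_feature,
            .sys_reg = SYS_ID_AA64ISAR0_EL1,
            .field_pos = ID_AA64ISAR0_CRC32_SHIFT,
            .min_field_value = 1,
    },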
-
- 22 Apr 2020, 1 commit
-
-
Submitted by Ard Biesheuvel

fix #26081752

commit 0a1213fa7432778b71a1c0166bf56660a3aab030 upstream

This enables the use of per-task stack canary values if GCC has support for emitting the stack canary reference relative to the value of sp_el0, which holds the task struct pointer in the arm64 kernel.

The $(eval) extends KBUILD_CFLAGS at the moment the make rule is applied, which means asm-offsets.o (which we rely on for the offset value) is built without the arguments, and everything built afterwards has the options set.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: luanshi <zhangliguang@linux.alibaba.com>
Reviewed-by: Jia Zhang <zhang.jia@linux.alibaba.com>
Acked-by: Zou Cao <zoucao@linux.alibaba.com>
-
- 18 Mar 2020, 14 commits
-
-
Submitted by Zou Cao

We don't need kthread_return_to_user to tell the unwinder that it has reached a kernel thread; we can use __kernel_text_address() instead, which is the usual approach on other architectures such as x86 and powerpc.

Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
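For illustration, the unwinder's check reduces to something like this (a sketch; the surrounding unwind loop is omitted):

    #include <linux/kernel.h>

    /* Sketch: a saved PC outside kernel .text ends the unwind. A kernel
     * thread's final frame fails this test naturally, so no dedicated
     * kthread_return_to_user marker symbol is needed. */
    static bool unwind_pc_is_reliable(unsigned long pc)
    {
            return __kernel_text_address(pc);
    }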
-
Submitted by Torsten Duwe

cherry-picked from: https://patchwork.kernel.org/patch/10657429/

Enhance the stack unwinder so that it reports whether it had to stop normally or due to an error condition; unwind_frame() will report continue/error/normal ending and walk_stackframe() will pass that info on. __save_stack_trace() is used to check the validity of a stack; save_stack_trace_tsk_reliable() can now trivially be implemented. Modify arch/arm64/kernel/time.c, as the only external caller so far, to recognise the new semantics.

I had to introduce a marker symbol kthread_return_to_user to tell the normal origin of a kernel thread.

Signed-off-by: Torsten Duwe <duwe@suse.de>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
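A sketch of the resulting reliable variant, assuming (per the patchwork submission, which was not merged upstream in this form) that __save_stack_trace() now returns 0 on a clean walk and a negative errno on a broken one:

    #include <linux/sched.h>
    #include <linux/stacktrace.h>

    int save_stack_trace_tsk_reliable(struct task_struct *tsk,
                                      struct stack_trace *trace)
    {
            int err = __save_stack_trace(tsk, trace, 1);

            return err ? -EINVAL : 0;
    }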
-
Submitted by Zou Cao

Now that we support FTRACE_WITH_REGS with -fpatchable-function-entry, enable livepatch support depending on FTRACE_WITH_REGS.

Use task flag bit 6 to track the patch transition state for the consistency model, and add it to the work mask so it gets cleared on all kernel exits to userland.

Tell livepatch that regs->pc + 2*AARCH64_INSN_SIZE is the place to change the return address. This code differs significantly from the referenced patch because we use the new GCC feature.

References: https://patchwork.kernel.org/patch/10657431/
Based-on-code-from: Torsten Duwe <duwe@suse.de>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
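The flag plumbing described above amounts to a sketch like this (bit number taken from the text; the full work-mask wiring is elided):

    /* Sketch: arch/arm64/include/asm/thread_info.h additions */
    #define TIF_PATCH_PENDING       6       /* livepatch transition pending */
    #define _TIF_PATCH_PENDING      (1 << TIF_PATCH_PENDING)

    /* _TIF_PATCH_PENDING is also OR-ed into _TIF_WORK_MASK so the flag
     * is checked and cleared on every kernel exit to userland. */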
-
Submitted by Mark Rutland

commit 70927d02d409b5a79c3ed040ace5017da8284ede upstream.

When I tweaked the ftrace entry assembly in commit:

  3b23e4991fb66f6d ("arm64: implement ftrace with regs")

... my ifdeffery tweaks left ftrace_graph_caller undefined for CONFIG_DYNAMIC_FTRACE && CONFIG_FUNCTION_GRAPH_TRACER when ftrace is based on mcount.

The kbuild test robot reported that this issue is detected at link time:

| arch/arm64/kernel/entry-ftrace.o: In function `skip_ftrace_call':
| arch/arm64/kernel/entry-ftrace.S:238: undefined reference to `ftrace_graph_caller'
| arch/arm64/kernel/entry-ftrace.S:238:(.text+0x3c): relocation truncated to fit: R_AARCH64_CONDBR19 against undefined symbol
| `ftrace_graph_caller'
| arch/arm64/kernel/entry-ftrace.S:243: undefined reference to `ftrace_graph_caller'
| arch/arm64/kernel/entry-ftrace.S:243:(.text+0x54): relocation truncated to fit: R_AARCH64_CONDBR19 against undefined symbol
| `ftrace_graph_caller'

This patch fixes the ifdeffery so that the mcount version of ftrace_graph_caller doesn't depend on CONFIG_DYNAMIC_FTRACE. At the same time, a redundant #else is removed from the ifdeffery for the patchable-function-entry version of ftrace_graph_caller.

Fixes: 3b23e4991fb66f6d ("arm64: implement ftrace with regs")
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Torsten Duwe <duwe@lst.de>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Acked-by: Baoyou Xie <xie.baoyou@linux.alibaba.com>
-
Submitted by Zou Cao

Fix the following warning:

  arm64ksyms.c:(___ksymtab+_mcount+0x0): undefined reference to `_mcount'

Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Acked-by: Baoyou Xie <xie.baoyou@linux.alibaba.com>
-
Submitted by Mark Rutland

commit 7dc48bf96aa0fc8aa5b38cc3e5c36ac03171e680 upstream.

The core ftrace hooks take the instrumented PC in x0, but for some reason arm64's prepare_ftrace_return() takes this in x1.

For consistency, let's flip the argument order and always pass the instrumented PC in x0. There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Torsten Duwe <duwe@suse.de>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Acked-by: Baoyou Xie <xie.baoyou@linux.alibaba.com>
-
Submitted by Mark Rutland

commit 49e258e05e8e56d53af20be481b311c43d7c286b upstream.

The save_return_regs and restore_return_regs macros are only used by return_to_handler, and having them defined out-of-line only serves to obscure the logic.

Before we complicate things, let's clean this up and fold the logic directly into return_to_handler, saving a few lines of macro boilerplate in the process. At the same time, a missing trailing space is added to the comments, fixing a code style violation.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Torsten Duwe <duwe@suse.de>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Acked-by: Baoyou Xie <xie.baoyou@linux.alibaba.com>
-
Submitted by Torsten Duwe

commit 3b23e4991fb66f6d152f9055ede271a726ef9f21 upstream

This patch implements FTRACE_WITH_REGS for arm64, which allows a traced function's arguments (and some other registers) to be captured into a struct pt_regs, allowing these to be inspected and/or modified. This is a building block for live-patching, where a function's arguments may be forwarded to another function. This is also necessary to enable ftrace and in-kernel pointer authentication at the same time, as it allows the LR value to be captured and adjusted prior to signing.

Using GCC's -fpatchable-function-entry=N option, we can have the compiler insert a configurable number of NOPs between the function entry point and the usual prologue. This also ensures functions are AAPCS compliant (e.g. disabling inter-procedural register allocation).

For example, with -fpatchable-function-entry=2, GCC 8.1.0 compiles the following:

| unsigned long bar(void);
|
| unsigned long foo(void)
| {
|         return bar() + 1;
| }

... to:

| <foo>:
|         nop
|         nop
|         stp     x29, x30, [sp, #-16]!
|         mov     x29, sp
|         bl      0 <bar>
|         add     x0, x0, #0x1
|         ldp     x29, x30, [sp], #16
|         ret

This patch builds the kernel with -fpatchable-function-entry=2, prefixing each function with two NOPs. To trace a function, we replace these NOPs with a sequence that saves the LR into a GPR, then calls an ftrace entry assembly function which saves this and other relevant registers:

| mov     x9, x30
| bl      <ftrace-entry>

Since patchable functions are AAPCS compliant (and the kernel does not use x18 as a platform register), x9-x18 can be safely clobbered in the patched sequence and the ftrace entry code.

There are now two ftrace entry functions, ftrace_regs_entry (which saves all GPRs), and ftrace_entry (which saves the bare minimum). A PLT is allocated for each within modules.

Signed-off-by: Torsten Duwe <duwe@suse.de>
[Mark: rework asm, comments, PLTs, initialization, commit message]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Torsten Duwe <duwe@suse.de>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Tested-by: Torsten Duwe <duwe@suse.de>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Julien Thierry <jthierry@redhat.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Acked-by: Baoyou Xie <xie.baoyou@linux.alibaba.com>
-
Submitted by Mark Rutland

commit 1f377e043b3b8ef68caffe47bdad794f4e2cb030 upstream

So that assembly code can more easily manipulate the FP (x29) within a pt_regs, add an S_FP asm-offsets definition.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Torsten Duwe <duwe@suse.de>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Tested-by: Torsten Duwe <duwe@suse.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Acked-by: Baoyou Xie <xie.baoyou@linux.alibaba.com>
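The definition itself is a single line in arch/arm64/kernel/asm-offsets.c, as in the upstream commit:

    /* x29 is the frame pointer; expose its offset within pt_regs to asm */
    DEFINE(S_FP, offsetof(struct pt_regs, regs[29]));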
-
Submitted by Mark Rutland

commit e3bf8a67f759b498e09999804c3837688e03b304 upstream

For FTRACE_WITH_REGS, we're going to want to generate a MOV (register) instruction as part of the callsite initialization. As MOV (register) is an alias for ORR (shifted register), we can generate this with aarch64_insn_gen_logical_shifted_reg(), but it's somewhat verbose and difficult to read in-context.

Add an aarch64_insn_gen_move_reg() wrapper for this case so that we can write callers in a more straightforward way.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Torsten Duwe <duwe@suse.de>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Tested-by: Torsten Duwe <duwe@suse.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Acked-by: Baoyou Xie <xie.baoyou@linux.alibaba.com>
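The wrapper is a thin alias generator; this sketch follows the upstream commit:

    /* MOV Xd, Xm is an alias of ORR Xd, XZR, Xm */
    u32 aarch64_insn_gen_move_reg(enum aarch64_insn_register dst,
                                  enum aarch64_insn_register src,
                                  enum aarch64_insn_variant variant)
    {
            return aarch64_insn_gen_logical_shifted_reg(dst, AARCH64_INSN_REG_ZR,
                                                        src, 0, variant,
                                                        AARCH64_INSN_LOGIC_ORR);
    }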
-
Submitted by Mark Rutland

commit f1a54ae9af0da4d76239256ed640a93ab3aadac0 upstream

Currently we lazily-initialize a module's ftrace PLT at runtime when we install the first ftrace call. To do so we have to apply a number of sanity checks, transiently mark the module text as RW, and perform an IPI as part of handling Neoverse-N1 erratum #1542419.

We only expect the ftrace trampoline to point at ftrace_caller() (AKA FTRACE_ADDR), so let's simplify all of this by initializing the PLT at module load time, before the module loader marks the module RO and performs the initial I-cache maintenance for the module.

Thus we can rely on the module having been correctly initialized, and can simplify the runtime work necessary to install an ftrace call in a module. This will also allow for the removal of module_disable_ro().

Tested by forcing ftrace_make_call() to use the module PLT, and then loading up a module after setting up ftrace with:

| echo ":mod:<module-name>" > set_ftrace_filter;
| echo function > current_tracer;
| modprobe <module-name>

Since FTRACE_ADDR is only defined when CONFIG_DYNAMIC_FTRACE is selected, we wrap its use along with most of module_init_ftrace_plt() with ifdeffery rather than using IS_ENABLED().

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Torsten Duwe <duwe@suse.de>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Tested-by: Torsten Duwe <duwe@suse.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Acked-by: Baoyou Xie <xie.baoyou@linux.alibaba.com>
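A condensed sketch of the load-time initialization (per the upstream commit; find_section() is the helper factored out in the next entry below):

    /* Sketch: point the module's ftrace PLT at ftrace_caller() while
     * the module is still writable, before it is marked RO. */
    static int module_init_ftrace_plt(const Elf_Ehdr *hdr,
                                      const Elf_Shdr *sechdrs,
                                      struct module *mod)
    {
    #if defined(CONFIG_ARM64_MODULE_PLTS) && defined(CONFIG_DYNAMIC_FTRACE)
            const Elf_Shdr *s;
            struct plt_entry *plt;

            s = find_section(hdr, sechdrs, ".text.ftrace_trampoline");
            if (!s)
                    return -ENOEXEC;

            plt = (void *)s->sh_addr;
            *plt = get_plt_entry(FTRACE_ADDR, plt);
            mod->arch.ftrace_trampoline = plt;
    #endif
            return 0;
    }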
-
Submitted by Mark Rutland

commit bd8b21d3dd661658addc1cd4cc869bab11d28596 upstream

When we load a module, we have to perform some special work for a couple of named sections. To do this, we iterate over all of the module's sections, and perform work for each section we recognize.

To make it easier to handle the unexpected absence of a section, and to make the section-specific logic easier to read, let's factor the section search into a helper. Similar is already done in the core module loader, and other architectures (and ideally we'd unify these in future).

If we expect a module to have an ftrace trampoline section, but it doesn't have one, we'll now reject loading the module. When ARM64_MODULE_PLTS is selected, any correctly built module should have one (and this is assumed by arm64's ftrace PLT code), and the absence of such a section implies something has gone wrong at build time. Subsequent patches will make use of the new helper.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Torsten Duwe <duwe@suse.de>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Tested-by: Torsten Duwe <duwe@suse.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Acked-by: Baoyou Xie <xie.baoyou@linux.alibaba.com>
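The helper itself, as sketched from the upstream commit:

    static const Elf_Shdr *find_section(const Elf_Ehdr *hdr,
                                        const Elf_Shdr *sechdrs,
                                        const char *name)
    {
            const Elf_Shdr *s, *se;
            const char *secstrs = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;

            for (s = sechdrs, se = sechdrs + hdr->e_shnum; s < se; s++) {
                    if (strcmp(name, secstrs + s->sh_name) == 0)
                            return s;
            }

            return NULL;
    }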
-
Submitted by Torsten Duwe

commit edf072d36dbfdf74465b66988f30084b6c996fbf upstream.

In preparation for arm64 supporting ftrace built on other compiler options, let's have the arm64 Makefiles remove the $(CC_FLAGS_FTRACE) flags, whatever these may be, rather than assuming '-pg'. There should be no functional change as a result of this patch.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Torsten Duwe <duwe@suse.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Acked-by: Baoyou Xie <xie.baoyou@linux.alibaba.com>
-
Submitted by Mark Rutland

commit e4fe196642678565766815d99ab98a3a32d72dd4 upstream.

The global exports of ftrace_call and ftrace_graph_call are somewhat painful to read. Let's use the generic GLOBAL() macro to ameliorate matters. There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Torsten Duwe <duwe@suse.de>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Acked-by: Baoyou Xie <xie.baoyou@linux.alibaba.com>
-
- 15 Jan 2020, 1 commit
-
-
Submitted by Keith Busch

commit 60574d1e05b094d222162260dd9cac49f4d0996a upstream.

Parsing entries in an ACPI table had assumed a generic header structure. There is no standard ACPI header, though, so less common layouts with different field sizes required custom parsers to go through their subtable entry list.

Create the infrastructure for adding different table types so that parsing the entries array can be reused for all ACPI system tables and the common code doesn't need to be duplicated.

Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Tested-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Keith Busch <keith.busch@intel.com>
Tested-by: Brice Goglin <Brice.Goglin@inria.fr>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Fan Du <fan.du@intel.com>
Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
Acked-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
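The key piece is a union over the possible subtable header layouts, so a single walker can be told which one to decode (sketch based on the upstream series; the HMAT entries arrive in follow-up patches):

    /* Sketch: include/linux/acpi.h additions */
    enum acpi_subtable_type {
            ACPI_SUBTABLE_COMMON,
            ACPI_SUBTABLE_HMAT,
    };

    union acpi_subtable_headers {
            struct acpi_subtable_header common;
            struct acpi_hmat_structure hmat;
    };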
-
- 21 Dec 2019, 1 commit
-
-
Submitted by Greg Kroah-Hartman

This reverts commit 64694b27, which is commit 7faa313f05cad184e8b17750f0cbe5216ac6debb upstream.

It turns out one of the prerequisite patches wasn't in 4.19.y, so this patch didn't make sense there. Let's revert it.

Reported-by: Steven Rostedt <rostedt@goodmis.org>
Reported-by: Will Deacon <will@kernel.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Kevin Hilman <khilman@baylibre.com>
Cc: Sasha Levin <sashal@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 05 Dec 2019, 3 commits
-
-
Submitted by Will Deacon

[ Upstream commit 7faa313f05cad184e8b17750f0cbe5216ac6debb ]

Commit 396244692232 ("arm64: preempt: Provide our own implementation of asm/preempt.h") extended the preempt count field in struct thread_info to 64 bits, so that it consists of a 32-bit count plus a 32-bit flag indicating whether or not the current task needs rescheduling.

Whilst the asm-offsets definition of TSK_TI_PREEMPT was updated to point to this new field, the assembly usage was left untouched, meaning that a 32-bit load from TSK_TI_PREEMPT on a big-endian machine actually returns the reschedule flag instead of the count.

Whilst we could fix this by pointing TSK_TI_PREEMPT at the count field, we're actually better off reworking the two assembly users so that they operate on the whole 64-bit value in favour of inspecting the thread flags separately in order to determine whether a reschedule is needed.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reported-by: "kernelci.org bot" <bot@kernelci.org>
Tested-by: Kevin Hilman <khilman@baylibre.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
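For reference, the layout in question packs the count and the flag so that a single 64-bit load can test both; on big-endian the 32-bit halves swap, which is what the broken 32-bit load tripped over:

    /* arm64 struct thread_info - sketch of the relevant field */
    union {
            u64     preempt_count;  /* 0 => preemptible, <0 => bug */
            struct {
    #ifdef CONFIG_CPU_BIG_ENDIAN
                    u32     need_resched;
                    u32     count;
    #else
                    u32     count;
                    u32     need_resched;
    #endif
            } preempt;
    };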
-
Submitted by Suzuki K Poulose

[ Upstream commit f357b3a7e17af7736d67d8267edc1ed3d1dd9391 ]

The __cpu_up() routine ignores the errors reported by the firmware for a CPU bringup operation and instead looks for the error status set by the booting CPU. If the CPU never entered the kernel, we could end up assuming a stale error status, which otherwise would have been set/cleared appropriately by the booting CPU.

Reported-by: Steve Capper <steve.capper@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Steve Capper

[ Upstream commit a96a33b1ca57dbea4285893dedf290aeb8eb090b ]

For cases where there is a mismatch in ARMv8.2-LVA support between CPUs, we have to be careful in allowing secondary CPUs to boot if 52-bit virtual addresses have already been enabled on the boot CPU.

This patch adds code to the secondary startup path. If the boot CPU has enabled 52-bit VAs, then ID_AA64MMFR2_EL1 is checked to see whether the secondary can also enable 52-bit support. If not, the secondary is prevented from booting and an error message is displayed indicating why.

Technically this patch could be implemented using the cpufeature code when considering 52-bit userspace support. However, we employ low-level checks here as the cpufeature code won't be able to run if we have mismatched 52-bit kernel VA support.

Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
- 01 Dec 2019, 1 commit
-
-
Submitted by Andrey Ryabinin

[ Upstream commit 19a2ca0fb560fd7be7b5293c6b652c6d6078dcde ]

ARM64 has asm implementations of memchr(), memcmp(), str[r]chr(), str[n]cmp() and str[n]len(). KASAN doesn't see memory accesses in asm code, so it can potentially miss many bugs.

Ifdef out the __HAVE_ARCH_* defines of these functions when KASAN is enabled, so the generic implementations from lib/string.c will be used. We can't just remove the asm functions because the EFI stub uses them. And we can't have two non-weak functions either, so declare the asm functions as weak.

Link: http://lkml.kernel.org/r/20180920135631.23833-2-aryabinin@virtuozzo.com
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Reported-by: Kyeongdon Kim <kyeongdon.kim@lge.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
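The effect in arch/arm64/include/asm/string.h is roughly this (a sketch, abbreviated to two functions):

    /* Sketch: with KASAN enabled, leave __HAVE_ARCH_* undefined so the
     * instrumented generic lib/string.c versions are linked, while the
     * asm versions remain available (declared weak) for the EFI stub. */
    #ifndef CONFIG_KASAN
    #define __HAVE_ARCH_MEMCHR
    #define __HAVE_ARCH_MEMCMP
    #endif

    extern void *memchr(const void *, int, __kernel_size_t);
    extern int memcmp(const void *, const void *, size_t);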
-
- 21 Nov 2019, 1 commit
-
-
Submitted by Hari Vyas

[ Upstream commit e4ba15debcfd27f60d43da940a58108783bff2a6 ]

The bad_mode() handler is called if we encounter an unknown exception, with the expectation that the subsequent call to panic() will halt the system. Unfortunately, if the exception calling bad_mode() is taken from EL0, then the call to die() can end up killing the current user task and calling schedule() instead of falling through to panic().

Remove the die() call altogether, since we really want to bring down the machine in this "impossible" case.

Signed-off-by: Hari Vyas <hari.vyas@broadcom.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
- 06 Nov 2019, 3 commits
-
-
Submitted by Yunfeng Ye

[ Upstream commit 3e7c93bd04edfb0cae7dad1215544c9350254b8f ]

The return values of kzalloc() and kcalloc() were not checked at their call sites, so add the missing checks.

Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by James Morse

[ Upstream commit dd8a1f13488438c6c220b7cafa500baaf21a6e53 ]

CPUs affected by Neoverse-N1 #1542419 may execute a stale instruction if it was recently modified. The affected sequence requires freshly written instructions to be executable before a branch to them is updated.

There are very few places in the kernel that modify executable text, and all but one come with sufficient synchronisation:
 * The module loader's flush_module_icache() calls flush_icache_range(), which does a kick_all_cpus_sync().
 * bpf_int_jit_compile() calls flush_icache_range().
 * Kprobes calls aarch64_insn_patch_text(), which does its work in stop_machine().
 * Static keys and ftrace both patch between NOPs and branches to existing kernel code (not generated code).

The affected sequence is the interaction between ftrace and modules. The module PLT is cleaned using __flush_icache_range(), as the trampoline shouldn't be executable until we update the branch to it. Drop the double underscore so that this path runs kick_all_cpus_sync() too.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Hanjun Guo

[ Upstream commit 0ecc471a2cb7d4d386089445a727f47b59dc9b6e ]

HiSilicon Taishan v110 CPUs don't implement the CSV3 field of ID_AA64PFR0_EL1 and are not susceptible to Meltdown, so whitelist the MIDR in the kpti_safe_list[] table.

Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Reviewed-by: John Garry <john.garry@huawei.com>
Reviewed-by: Zhangshaokun <zhangshaokun@hisilicon.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
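The whitelist addition is one line in the kpti_safe_list[] table (a sketch; surrounding entries abbreviated):

    static const struct midr_range kpti_safe_list[] = {
            MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
            MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
            MIDR_ALL_VERSIONS(MIDR_HISI_TSV110),    /* new entry */
            { /* sentinel */ }
    };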
-
- 29 Oct 2019, 1 commit
-
-
Submitted by Marc Zyngier

commit 93916beb70143c46bf1d2bacf814be3a124b253b upstream.

It appears that the only case where we need to apply the TX2_219_TVM mitigation is when the core is in SMT mode. So let's condition the enabling on detecting a CPU whose MPIDR_EL1.Aff0 is non-zero.

Cc: <stable@vger.kernel.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
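The detection boils down to one affinity-level check (a sketch; helper name hypothetical):

    #include <asm/cputype.h>

    /* Sketch: on ThunderX2, MPIDR_EL1.Aff0 is the thread ID within a
     * core, so a non-zero value means the core is running in SMT mode. */
    static bool tx2_core_in_smt_mode(void)
    {
            return MPIDR_AFFINITY_LEVEL(read_cpuid_mpidr(), 0) != 0;
    }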
-
- 18 Oct 2019, 2 commits
-
-
Submitted by Masayoshi Mizuma

commit 4585fc59c0e813188d6a4c5de1f6976fce461fc2 upstream.

A system with the SVE feature crashed because the memory pointed to by task->thread.sve_state was corrupted: sve_state was freed while forking the child process. The child process holds the same sve_state pointer as the parent, because the child's task_struct is copied from the parent's. If copy_process() fails with an error somewhere, for example in copy_creds(), then sve_state is freed even though the parent is still alive. The flow is as follows:

copy_process
  p = dup_task_struct
    => arch_dup_task_struct
       *dst = *src;  // copy the entire region
  :
  retval = copy_creds
  if (retval < 0)
    goto bad_fork_free;
  :
bad_fork_free:
  ...
  delayed_free_task(p);
    => free_task
       => arch_release_task_struct
          => fpsimd_release_task
             => __sve_free
                => kfree(task->thread.sve_state);
                   // frees the parent's sve_state

Move setting the child's sve_state to NULL and clearing the TIF_SVE flag into arch_dup_task_struct() so that the child doesn't free the parent's buffer. There is no need to wait until copy_process() to clear TIF_SVE for dst, because the thread flags for dst are initialized already by copying the src task_struct. This change simplifies the code, so get rid of comments that are no longer needed.

As a note, arm64 used to have thread_info on the stack, so it would not have been possible to clear TIF_SVE until the stack was initialized. From commit c02433dd ("arm64: split thread_info from task stack"), the thread_info is part of the task, so it is valid to modify the flag from arch_dup_task_struct().

Cc: stable@vger.kernel.org # 4.15.x-
Fixes: bc0ee476 ("arm64/sve: Core task context handling")
Signed-off-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
Reported-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Suggested-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Tested-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
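The fix, sketched after the upstream commit:

    /* Sketch: never let the child inherit the parent's SVE buffer */
    int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
    {
            if (current->mm)
                    fpsimd_preserve_current_state();

            *dst = *src;

            /* dst gets its own sve_state allocated lazily on first SVE
             * use; inheriting the pointer would lead to a double free. */
            dst->thread.sve_state = NULL;
            clear_tsk_thread_flag(dst, TIF_SVE);

            return 0;
    }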
-
Submitted by Jeremy Linton

commit 98dc19902a0b2e5348e43d6a2c39a0a7d0fc639e upstream.

ACPI 6.3 adds a thread flag to represent whether a CPU/PE is actually a thread. Given that the MPIDR_MT bit may not represent this information consistently on homogeneous machines, we should prefer the PPTT flag if it's available.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Reviewed-by: Robert Richter <rrichter@marvell.com>
[will: made acpi_cpu_is_threaded() return 'bool']
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: John Garry <john.garry@huawei.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
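The resulting helper prefers the PPTT flag and falls back to MPIDR (sketch following the upstream commit):

    #include <linux/acpi.h>
    #include <asm/cputype.h>

    bool acpi_cpu_is_threaded(int cpu)
    {
            int is_threaded = acpi_pptt_cpu_is_thread(cpu);

            /* a negative value means the PPTT doesn't say; assume a
             * homogeneous machine and trust MPIDR_EL1.MT */
            if (is_threaded < 0)
                    is_threaded = read_cpuid_mpidr() & MPIDR_MT_BITMASK;

            return !!is_threaded;
    }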
-
- 12 Oct 2019, 2 commits
-
-
Submitted by Josh Poimboeuf

commit a111b7c0f20e13b54df2fa959b3dc0bdf1925ae6 upstream.

Configure arm64 runtime CPU speculation bug mitigations in accordance with the 'mitigations=' cmdline option. This affects Meltdown, Spectre v2, and Speculative Store Bypass. The default behavior is unchanged.

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
[will: reorder checks so KASLR implies KPTI and SSBS is affected by cmdline]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Marc Zyngier

commit 517953c2c47f9c00a002f588ac856a5bc70cede3 upstream.

The SMCCC ARCH_WORKAROUND_1 service can indicate that although the firmware knows about the Spectre-v2 mitigation, this particular CPU is not vulnerable, and it is thus not necessary to call the firmware on this CPU.

Let's use this information to our benefit.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
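Interpreting the probe result might look like this (a sketch; constants are from linux/arm-smccc.h, the helper name is hypothetical):

    #include <linux/arm-smccc.h>

    static bool needs_bp_hardening_fw_call(const struct arm_smccc_res *res)
    {
            switch ((int)res->a0) {
            case SMCCC_RET_SUCCESS:
                    return true;    /* call firmware on kernel entry */
            case SMCCC_RET_NOT_REQUIRED:
                    return false;   /* firmware says this CPU is safe */
            default:
                    return false;   /* workaround not supported */
            }
    }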
-