- May 31, 2019: 1 commit
-
-
Submitted by Will Deacon
commit 969f5ea627570e91c9d54403287ee3ed657f58fe upstream. Revisions of the Cortex-A76 CPU prior to r4p0 are affected by an erratum that can prevent interrupts from being taken when single-stepping. This patch implements a software workaround to prevent userspace from effectively being able to disable interrupts. Cc: <stable@vger.kernel.org> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: NWill Deacon <will.deacon@arm.com> Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- May 22, 2019: 2 commits
-
-
Submitted by Vincenzo Frascino
commit d263119387de9975d2acba1dfd3392f7c5979c18 upstream. Currently, compat tasks running on arm64 can allocate memory up to TASK_SIZE_32 (UL(0x100000000)). This means that mmap() allocations, if we treat them as returning an array, are not compliant with section 6.5.8 of the C standard (C99), which states that: "If the expression P points to an element of an array object and the expression Q points to the last element of the same array object, the pointer expression Q+1 compares greater than P". Redefine TASK_SIZE_32 to address the issue. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Jann Horn <jannh@google.com> Cc: <stable@vger.kernel.org> Reported-by: Jann Horn <jannh@google.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> [will: fixed typo in comment] Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
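The redefinition itself is a one-liner; a sketch assuming the usual asm/processor.h definition (config guards omitted):

```c
/* Before: compat tasks could mmap() right up to the 4GiB boundary. */
#define TASK_SIZE_32	UL(0x100000000)

/* After (this commit, sketched): keep the last page unmapped so that
 * "one past the end" of any mmap()ed array still compares greater than
 * its base, per C99 section 6.5.8. */
#undef  TASK_SIZE_32
#define TASK_SIZE_32	(UL(0x100000000) - PAGE_SIZE)
```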
-
Submitted by Will Deacon
commit 75a19a0202db21638a1c2b424afb867e1f9a2376 upstream. When executing clock_gettime(), either in the vDSO or via a system call, we need to ensure that the read of the counter register occurs within the seqlock reader critical section. This ensures that updates to the clocksource parameters (e.g. the multiplier) are consistent with the counter value and therefore avoids the situation where time appears to go backwards across multiple reads. Extend the vDSO logic so that the seqlock critical section covers the read of the counter register as well as accesses to the data page. Since reads of the counter system registers are not ordered by memory barrier instructions, introduce dependency ordering from the counter read to a subsequent memory access so that the seqlock memory barriers apply to the counter access in both the vDSO and the system call paths. Cc: <stable@vger.kernel.org> Cc: Marc Zyngier <marc.zyngier@arm.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/linux-arm-kernel/alpine.DEB.2.21.1902081950260.1662@nanos.tec.linutronix.de/ Reported-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
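The structural change can be pictured with a generic seqlock-style read loop. Everything below is illustrative pseudocode with made-up names (the real code is vDSO assembly and the kernel clocksource path); the point is only that the counter read now sits inside the retry loop:

```c
struct clocksource_snapshot {		/* hypothetical, for illustration only */
	unsigned int seq;
	u64 cycle_last, base_ns;
	u32 mult, shift;
};

u64 read_counter(void);			/* stands in for the CNTVCT_EL0 read */

static u64 sketch_clock_gettime_ns(const struct clocksource_snapshot *vd)
{
	unsigned int seq;
	u64 cycles, ns;

	do {
		seq = READ_ONCE(vd->seq);	/* seqlock reader entry */
		smp_rmb();
		cycles = read_counter();	/* counter read is now inside the critical section */
		ns = vd->base_ns +
		     (((cycles - vd->cycle_last) * vd->mult) >> vd->shift);
		smp_rmb();
	} while ((seq & 1) || READ_ONCE(vd->seq) != seq);	/* retry on concurrent update */

	return ns;
}
```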
-
- May 10, 2019: 1 commit
-
-
Submitted by Will Deacon
commit 03110a5cb2161690ae5ac04994d47ed0cd6cef75 upstream. Our futex implementation makes use of LDXR/STXR loops to perform atomic updates to user memory from atomic context. This can lead to latency problems if we end up spinning around the LL/SC sequence at the expense of doing something useful. Rework our futex atomic operations so that we return -EAGAIN if we fail to update the futex word after 128 attempts. The core futex code will reschedule if necessary and we'll try again later. Cc: <stable@kernel.org> Fixes: 6170a974 ("arm64: Atomic operations") Signed-off-by: NWill Deacon <will.deacon@arm.com> Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
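A conceptual C sketch of the new bounded-retry behaviour. The real code is an LDXR/STXR inline-asm loop; the helper below and its names are made up, and a compare-and-swap stands in for one LL/SC attempt:

```c
/* Conceptual sketch only, not the arm64 implementation. */
#define FUTEX_MAX_LOOPS	128	/* bound named after the 128-attempt limit above */

static int futex_atomic_add_sketch(u32 __user *uaddr, u32 oparg, u32 *oldval)
{
	int loops = FUTEX_MAX_LOOPS;
	u32 old, new;

	do {
		if (get_user(old, uaddr))
			return -EFAULT;
		new = old + oparg;
		/* cmpxchg_user_sketch() is a hypothetical stand-in for one LL/SC pass */
	} while (cmpxchg_user_sketch(uaddr, old, new) != old && --loops);

	if (!loops)
		return -EAGAIN;	/* let the core futex code reschedule and retry later */

	*oldval = old;
	return 0;
}
```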
-
- May 4, 2019: 1 commit
-
-
Submitted by Marc Zyngier
[ Upstream commit a6ecfb11bf37743c1ac49b266595582b107b61d4 ] When halting a guest, QEMU flushes the virtual ITS caches, which amounts to writing to the various tables that the guest has allocated. When doing this, we fail to take the srcu lock, and the kernel shouts loudly if running a lockdep kernel:

  [ 69.680416] =============================
  [ 69.680819] WARNING: suspicious RCU usage
  [ 69.681526] 5.1.0-rc1-00008-g600025238f51-dirty #18 Not tainted
  [ 69.682096] -----------------------------
  [ 69.682501] ./include/linux/kvm_host.h:605 suspicious rcu_dereference_check() usage!
  [ 69.683225]
  [ 69.683225] other info that might help us debug this:
  [ 69.683225]
  [ 69.683975]
  [ 69.683975] rcu_scheduler_active = 2, debug_locks = 1
  [ 69.684598] 6 locks held by qemu-system-aar/4097:
  [ 69.685059]  #0: 0000000034196013 (&kvm->lock){+.+.}, at: vgic_its_set_attr+0x244/0x3a0
  [ 69.686087]  #1: 00000000f2ed935e (&its->its_lock){+.+.}, at: vgic_its_set_attr+0x250/0x3a0
  [ 69.686919]  #2: 000000005e71ea54 (&vcpu->mutex){+.+.}, at: lock_all_vcpus+0x64/0xd0
  [ 69.687698]  #3: 00000000c17e548d (&vcpu->mutex){+.+.}, at: lock_all_vcpus+0x64/0xd0
  [ 69.688475]  #4: 00000000ba386017 (&vcpu->mutex){+.+.}, at: lock_all_vcpus+0x64/0xd0
  [ 69.689978]  #5: 00000000c2c3c335 (&vcpu->mutex){+.+.}, at: lock_all_vcpus+0x64/0xd0
  [ 69.690729]
  [ 69.690729] stack backtrace:
  [ 69.691151] CPU: 2 PID: 4097 Comm: qemu-system-aar Not tainted 5.1.0-rc1-00008-g600025238f51-dirty #18
  [ 69.691984] Hardware name: rockchip evb_rk3399/evb_rk3399, BIOS 2019.04-rc3-00124-g2feec69fb1 03/15/2019
  [ 69.692831] Call trace:
  [ 69.694072]  lockdep_rcu_suspicious+0xcc/0x110
  [ 69.694490]  gfn_to_memslot+0x174/0x190
  [ 69.694853]  kvm_write_guest+0x50/0xb0
  [ 69.695209]  vgic_its_save_tables_v0+0x248/0x330
  [ 69.695639]  vgic_its_set_attr+0x298/0x3a0
  [ 69.696024]  kvm_device_ioctl_attr+0x9c/0xd8
  [ 69.696424]  kvm_device_ioctl+0x8c/0xf8
  [ 69.696788]  do_vfs_ioctl+0xc8/0x960
  [ 69.697128]  ksys_ioctl+0x8c/0xa0
  [ 69.697445]  __arm64_sys_ioctl+0x28/0x38
  [ 69.697817]  el0_svc_common+0xd8/0x138
  [ 69.698173]  el0_svc_handler+0x38/0x78
  [ 69.698528]  el0_svc+0x8/0xc

The fix is to obviously take the srcu lock, just like we do on the read side of things since bf308242. One wonders why this wasn't fixed at the same time, but hey... Fixes: bf308242 ("KVM: arm/arm64: VGIC/ITS: protect kvm_read_guest() calls with SRCU lock") Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Sasha Levin (Microsoft) <sashal@kernel.org>
-
- April 27, 2019: 1 commit
-
-
Submitted by Nathan Chancellor
commit ff8acf929014b7f87315588e0daf8597c8aa9d1c upstream. Commit 045afc24124d ("arm64: futex: Fix FUTEX_WAKE_OP atomic ops with non-zero result value") removed oldval's zero initialization in arch_futex_atomic_op_inuser because it is not necessary. Unfortunately, Android's arm64 GCC 4.9.4 [1] does not agree:

  ../kernel/futex.c: In function 'do_futex':
  ../kernel/futex.c:1658:17: warning: 'oldval' may be used uninitialized in this function [-Wmaybe-uninitialized]
     return oldval == cmparg;
                   ^
  In file included from ../kernel/futex.c:73:0:
  ../arch/arm64/include/asm/futex.h:53:6: note: 'oldval' was declared here
    int oldval, ret, tmp;
        ^

GCC fails to follow that when ret is non-zero, futex_atomic_op_inuser returns right away, avoiding the uninitialized use that it claims. Restoring the zero initialization works around this issue. [1]: https://android.googlesource.com/platform/prebuilts/gcc/linux-x86/aarch64/aarch64-linux-android-4.9/ Cc: stable@vger.kernel.org Fixes: 045afc24124d ("arm64: futex: Fix FUTEX_WAKE_OP atomic ops with non-zero result value") Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
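A stand-alone sketch (deliberately not kernel code) of the pattern GCC 4.9.4 mis-analyses, and of the zero initialization that placates it:

```c
/* oldval is only read when ret == 0, i.e. only after it has been written;
 * older GCC cannot prove this and warns. */
static int atomic_op_sketch(int *oldval, int *uaddr)
{
	int ret = 0;		/* pretend the LL/SC sequence succeeded */

	*oldval = *uaddr;	/* only meaningful when ret == 0 */
	return ret;
}

int futex_op_sketch(int *uaddr, int cmparg)
{
	int oldval = 0, ret;	/* the restored "= 0" silences -Wmaybe-uninitialized */

	ret = atomic_op_sketch(&oldval, uaddr);
	if (ret)
		return ret;
	return oldval == cmparg;
}
```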
-
- April 17, 2019: 1 commit
-
-
Submitted by Will Deacon
commit 045afc24124d80c6998d9c770844c67912083506 upstream. Rather embarrassingly, our futex() FUTEX_WAKE_OP implementation doesn't explicitly set the return value on the non-faulting path and instead leaves it holding the result of the underlying atomic operation. This means that any FUTEX_WAKE_OP atomic operation which computes a non-zero value will be reported as having failed. Regrettably, I wrote the buggy code back in 2011 and it was upstreamed as part of the initial arm64 support in 2012. The reasons we appear to get away with this are:

1. FUTEX_WAKE_OP is rarely used and therefore doesn't appear to get exercised by futex() test applications

2. If the result of the atomic operation is zero, the system call behaves correctly

3. Prior to version 2.25, the only operation used by GLIBC set the futex to zero, and therefore worked as expected. From 2.25 onwards, FUTEX_WAKE_OP is not used by GLIBC at all.

Fix the implementation by ensuring that the return value is either 0 to indicate that the atomic operation completed successfully, or -EFAULT if we encountered a fault when accessing the user mapping. Cc: <stable@kernel.org> Fixes: 6170a974 ("arm64: Atomic operations") Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- March 24, 2019: 2 commits
-
-
Submitted by Julien Thierry
commit 5870970b9a828d8693aa6d15742573289d7dbcd0 upstream. When using VHE, the host needs to clear the HCR_EL2.TGE bit in order to interact with guest TLBs, switching from the EL2&0 translation regime to EL1&0. However, a non-maskable asynchronous event, such as SDEI, could happen while TGE is cleared. Because of this, address translation operations relying on the EL2&0 translation regime (TLB invalidation, userspace access, ...) could fail. Fix this by properly setting HCR_EL2.TGE when entering NMI context and clearing it if necessary when returning to the interrupted context. Signed-off-by: Julien Thierry <julien.thierry@arm.com> Suggested-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: James Morse <james.morse@arm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Will Deacon <will.deacon@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: James Morse <james.morse@arm.com> Cc: linux-arch@vger.kernel.org Cc: stable@vger.kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Marc Zyngier
[ Upstream commit 358b28f09f0ab074d781df72b8a671edb1547789 ] The current kvm_psci_vcpu_on implementation will directly try to manipulate the state of the VCPU to reset it. However, since this is not done on the thread that runs the VCPU, we can end up in a strangely corrupted state when the source and target VCPUs are running at the same time. Fix this by factoring out all reset logic from the PSCI implementation and forwarding the required information along with a request to the target VCPU. Reviewed-by: NAndrew Jones <drjones@redhat.com> Signed-off-by: NMarc Zyngier <marc.zyngier@arm.com> Signed-off-by: NChristoffer Dall <christoffer.dall@arm.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
- February 13, 2019: 2 commits
-
-
Submitted by Will Deacon
[ Upstream commit 1b57ec8c75279b873639eb44a215479236f93481 ] As of commit 6460d3201471 ("arm64: io: Ensure calls to delay routines are ordered against prior readX()"), MMIO reads smaller than 64 bits fail to compile under clang because we end up mixing 32-bit and 64-bit register operands for the same data processing instruction:

  ./include/asm-generic/io.h:695:9: warning: value size does not match register size specified by the constraint and modifier [-Wasm-operand-widths]
          return readb(addr);
                 ^
  ./arch/arm64/include/asm/io.h:147:58: note: expanded from macro 'readb'
                 ^
  ./include/asm-generic/io.h:695:9: note: use constraint modifier "w"
  ./arch/arm64/include/asm/io.h:147:50: note: expanded from macro 'readb'
                 ^
  ./arch/arm64/include/asm/io.h:118:24: note: expanded from macro '__iormb'
          asm volatile("eor %0, %1, %1\n" \
                 ^

Fix the build by casting the macro argument to 'unsigned long' when used as an input to the inline asm. Reported-by: Nick Desaulniers <nick.desaulniers@gmail.com> Reported-by: Nathan Chancellor <natechancellor@gmail.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Will Deacon
[ Upstream commit 6460d32014717686d3b7963595950ba2c6d1bb5e ] A relatively standard idiom for ensuring that a pair of MMIO writes to a device arrive at that device with a specified minimum delay between them is as follows:

  writel_relaxed(42, dev_base + CTL1);
  readl(dev_base + CTL1);
  udelay(10);
  writel_relaxed(42, dev_base + CTL2);

the intention being that the read-back from the device will push the prior write to CTL1, and the udelay will hold up the write to CTL2 until at least 10us have elapsed. Unfortunately, on arm64 where the underlying delay loop is implemented as a read of the architected counter, the CPU does not guarantee ordering from the readl() to the delay loop and therefore the delay loop could in theory be speculated and not provide the desired interval between the two writes. Fix this in a similar manner to PowerPC by introducing a dummy control dependency on the output of readX() which, combined with the ISB in the read of the architected counter, guarantees that a subsequent delay loop can not be executed until the readX() has returned its result. Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
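The arm64 side of the fix is a read barrier macro that consumes the value returned by readX(); a hedged sketch follows (the eor/cbnz sequence appears in the compiler warning quoted in the previous entry, but the surrounding macro is reconstructed here, not copied from asm/io.h):

```c
/*
 * Sketch of an __iormb()-style barrier used after MMIO reads. The dummy
 * "eor; cbnz" consumes the value read from the device, creating a control
 * dependency: a later delay loop (whose counter read contains an ISB)
 * cannot be speculated before the MMIO read has actually returned.
 */
#define __iormb(v)							\
({									\
	unsigned long tmp;						\
									\
	rmb();								\
	asm volatile("eor	%0, %1, %1\n"				\
		     "cbnz	%0, ."					\
		     : "=r" (tmp)					\
		     : "r" ((unsigned long)(v))				\
		     : "memory");					\
})
```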
-
- January 26, 2019: 2 commits
-
-
Submitted by Will Deacon
[ Upstream commit 33309ecda0070506c49182530abe7728850ebe78 ] The dcache_by_line_op macro suffers from a couple of small problems: First, the GAS directives that are currently being used rely on assembler behavior that is not documented, and probably not guaranteed to produce the correct behavior going forward. As a result, we end up with some undefined symbols in cache.o:

  $ nm arch/arm64/mm/cache.o
  ...
           U civac
  ...
           U cvac
           U cvap
           U cvau

This is due to the fact that the comparisons used to select the operation type in the dcache_by_line_op macro are comparing symbols not strings, and even though it seems that GAS is doing the right thing here (undefined symbols by the same name are equal to each other), it seems unwise to rely on this. Second, when patching in a DC CVAP instruction on CPUs that support it, the fallback path consists of a DC CVAU instruction which may be affected by CPU errata that require ARM64_WORKAROUND_CLEAN_CACHE. Solve these issues by unrolling the various maintenance routines and using the conditional directives that are documented as operating on strings. To avoid the complexity of nested alternatives, we move the DC CVAP patching to __clean_dcache_area_pop, falling back to a branch to __clean_dcache_area_poc if DCPOP is not supported by the CPU. Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Suggested-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Qian Cai
[ Upstream commit 6e8830674ea77f57d57a33cca09083b117a71f41 ] If the kernel is configured with KASAN_EXTRA, the stack size is increased significantly due to setting the GCC -fstack-reuse option to "none" [1]. As a result, it can trigger a stack overrun quite often with 32k stack size compiled using GCC 8. For example, this reproducer https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/syscalls/madvise/madvise06.c can trigger a "corrupted stack end detected inside scheduler" very reliably with CONFIG_SCHED_STACK_END_CHECK enabled. There are other reports at: https://lore.kernel.org/lkml/1542144497.12945.29.camel@gmx.us/ https://lore.kernel.org/lkml/721E7B42-2D55-4866-9C1A-3E8D64F33F9C@gmx.us/ There are just too many functions that could have a large stack with KASAN_EXTRA due to large local variables that have been called over and over again without being able to reuse the stacks. Some noticeable ones are:

  size
  7536 shrink_inactive_list
  7440 shrink_page_list
  6560 fscache_stats_show
  3920 jbd2_journal_commit_transaction
  3216 try_to_unmap_one
  3072 migrate_page_move_mapping
  3584 migrate_misplaced_transhuge_page
  3920 ip_vs_lblcr_schedule
  4304 lpfc_nvme_info_show
  3888 lpfc_debugfs_nvmestat_data.constprop

There are 49 other functions over 2k in size while compiling the kernel with "-Wframe-larger-than=" on this machine. Hence, it is too much work to change Makefiles for each object to compile without -fsanitize-address-use-after-scope individually. [1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81715#c23 Signed-off-by: Qian Cai <cai@lca.pw> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
- January 23, 2019: 2 commits
-
-
Submitted by Mark Rutland
[ Upstream commit b3669b1e1c09890d61109a1a8ece2c5b66804714 ] To allow EL0 (and/or EL1) to use pointer authentication functionality, we must ensure that pointer authentication instructions and accesses to pointer authentication keys are not trapped to EL2. This patch ensures that HCR_EL2 is configured appropriately when the kernel is booted at EL2. For non-VHE kernels we set HCR_EL2.{API,APK}, ensuring that EL1 can access keys and permit EL0 use of instructions. For VHE kernels host EL0 (TGE && E2H) is unaffected by these settings, and it doesn't matter how we configure HCR_EL2.{API,APK}, so we don't bother setting them. This does not enable support for KVM guests, since KVM manages HCR_EL2 itself when running VMs. Reviewed-by: NRichard Henderson <richard.henderson@linaro.org> Signed-off-by: NMark Rutland <mark.rutland@arm.com> Signed-off-by: NKristina Martsenko <kristina.martsenko@arm.com> Acked-by: NChristoffer Dall <christoffer.dall@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: kvmarm@lists.cs.columbia.edu Signed-off-by: NWill Deacon <will.deacon@arm.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Submitted by Mark Rutland
[ Upstream commit 4eaed6aa2c628101246bcabc91b203bfac1193f8 ] In KVM we define the configuration of HCR_EL2 for a VHE host in HCR_HOST_VHE_FLAGS, but we don't have a similar definition for the non-VHE host flags, and open-code HCR_RW. Further, in head.S we open-code the flags for VHE and non-VHE configurations. In future, we're going to want to configure more flags for the host, so let's add a HCR_HOST_NVHE_FLAGS definition, and consistently use both HCR_HOST_VHE_FLAGS and HCR_HOST_NVHE_FLAGS in the KVM code and head.S. We now use mov_q to generate the HCR_EL2 value, as we do when configuring other registers in head.S. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: kvmarm@lists.cs.columbia.edu Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
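The definitions this adds boil down to a pair of macros. A hedged sketch: the non-VHE value follows the open-coded HCR_RW mentioned above, while the VHE value shown is an assumption based on the usual E2H/TGE host configuration (see arch/arm64/include/asm/kvm_arm.h for the real values):

```c
/* Sketch only, not a verified copy of the header. */
#define HCR_HOST_NVHE_FLAGS	(HCR_RW)			/* non-VHE host */
#define HCR_HOST_VHE_FLAGS	(HCR_RW | HCR_TGE | HCR_E2H)	/* VHE host (assumed) */
```

The pointer-authentication commit in the entry above then extends the non-VHE flags so EL1/EL0 key accesses and instructions are not trapped to EL2.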
-
- January 10, 2019: 2 commits
-
-
Submitted by Will Deacon
commit 169113ece0f29ebe884a6cfcf57c1ace04d8a36a upstream. The ARM Linux kernel handles the EABI syscall numbers as follows:

  0           - NR_SYSCALLS-1 : Invoke syscall via syscall table
  NR_SYSCALLS - 0xeffff       : -ENOSYS (to be allocated in future)
  0xf0000     - 0xf07ff       : Private syscall or -ENOSYS if not allocated
  > 0xf07ff                   : SIGILL

Our compat code gets this wrong and ends up sending SIGILL in response to all syscalls greater than NR_SYSCALLS which have a value greater than 0x7ff in the bottom 16 bits. Fix this by defining the end of the ARM private syscall region and checking the syscall number against that directly. Update the comment while we're at it. Cc: <stable@vger.kernel.org> Cc: Dave Martin <Dave.Martin@arm.com> Reported-by: Pi-Hsun Shih <pihsun@chromium.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
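A hedged sketch of the dispatch logic implied by the table above; the helper functions are hypothetical stand-ins for the real compat syscall path, and only the numeric ranges come from the commit text:

```c
/* Illustrative only; constants mirror the ranges listed above. */
#define __ARM_NR_COMPAT_BASE	0x0f0000
#define __ARM_NR_COMPAT_END	(__ARM_NR_COMPAT_BASE + 0x800)	/* 0x0f0800 */

static long compat_syscall_dispatch_sketch(unsigned int scno)
{
	if (scno < __NR_compat_syscalls)
		return invoke_syscall_table(scno);	/* regular table-based dispatch */

	if (scno >= __ARM_NR_COMPAT_BASE && scno < __ARM_NR_COMPAT_END)
		return arm_private_syscall(scno);	/* cacheflush, set_tls, ... or -ENOSYS */

	if (scno < __ARM_NR_COMPAT_BASE)
		return -ENOSYS;		/* unallocated, reserved for future use */

	deliver_sigill();		/* only numbers above 0xf07ff get SIGILL */
	return -ENOSYS;
}
```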
-
Submitted by Will Deacon
commit df655b75c43fba0f2621680ab261083297fd6d16 upstream. Although bit 31 of VTCR_EL2 is RES1, we inadvertently end up setting all of the upper 32 bits to 1 as well because we define VTCR_EL2_RES1 as signed, which is sign-extended when assigning to kvm->arch.vtcr. Lucky for us, the architecture currently treats these upper bits as RES0 so, whilst we've been naughty, we haven't set fire to anything yet. Cc: <stable@vger.kernel.org> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: NWill Deacon <will.deacon@arm.com> Signed-off-by: NMarc Zyngier <marc.zyngier@arm.com> Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- December 8, 2018: 1 commit
-
-
Submitted by Masami Hiramatsu
commit 874bfc6e upstream. Since commit 4378a7d4 ("arm64: implement syscall wrappers") introduced the "__arm64_" prefix for all syscall wrapper symbols in sys_call_table, the syscall tracer cannot find the corresponding metadata from the syscall name. As a result, we have no syscall ftrace events on arm64 kernels, and some BPF test cases fail on arm64. To fix this issue, this introduces a custom arch_syscall_match_sym_name() which skips the first 8 bytes when comparing the syscall and symbol names. Fixes: 4378a7d4 ("arm64: implement syscall wrappers") Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Acked-by: Will Deacon <will.deacon@arm.com> Tested-by: Naresh Kamboju <naresh.kamboju@linaro.org> Cc: stable@vger.kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
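A hedged sketch of the helper described above, as it would sit in the arm64 ftrace header; 8 is the length of the "__arm64_" prefix:

```c
#define ARCH_HAS_SYSCALL_MATCH_SYM_NAME

static inline bool arch_syscall_match_sym_name(const char *sym,
					       const char *name)
{
	/*
	 * Every syscall symbol carries a "__arm64_" prefix, so skip the
	 * first 8 characters before comparing against the syscall name.
	 */
	return !strcmp(sym + 8, name);
}
```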
-
- November 27, 2018: 1 commit
-
-
Submitted by Nathan Chancellor
[ Upstream commit b5bb4258 ] Clang warns that if the default case is taken, ret will be uninitialized.

  ./arch/arm64/include/asm/percpu.h:196:2: warning: variable 'ret' is used uninitialized whenever switch default is taken [-Wsometimes-uninitialized]
          default:
          ^~~~~~~
  ./arch/arm64/include/asm/percpu.h:200:9: note: uninitialized use occurs here
          return ret;
                 ^~~
  ./arch/arm64/include/asm/percpu.h:157:19: note: initialize the variable 'ret' to silence this warning
          unsigned long ret, loop;
                            ^
                             = 0

This warning appears several times while building the erofs filesystem. While it's not strictly wrong, the BUILD_BUG will prevent this from becoming a true problem. Initialize ret to 0 in the default case right before the BUILD_BUG to silence all of these warnings. Reported-by: Prasad Sodagudi <psodagud@codeaurora.org> Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Dennis Zhou <dennis@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
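A simplified sketch of the shape of the fix; the real per-CPU helper uses width-specific atomics, so the body and names here are illustrative only:

```c
/* Simplified sketch, not the arm64 percpu.h code. */
static inline unsigned long percpu_read_sketch(void *ptr, int size)
{
	unsigned long ret;

	switch (size) {
	case 1:
		ret = *(u8 *)ptr;	/* real code: width-specific atomic access */
		break;
	case 2:
		ret = *(u16 *)ptr;
		break;
	case 4:
		ret = *(u32 *)ptr;
		break;
	case 8:
		ret = *(u64 *)ptr;
		break;
	default:
		ret = 0;		/* unreachable, but silences -Wsometimes-uninitialized */
		BUILD_BUG();
	}

	return ret;
}
```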
-
- September 11, 2018: 1 commit
-
-
Submitted by Miguel Ojeda
All other uses of "asm goto" go through asm_volatile_goto, which avoids a miscompile when using GCC < 4.8.2. Replace our open-coded "asm goto" statements with the asm_volatile_goto macro to avoid issues with older toolchains. Cc: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: NNick Desaulniers <ndesaulniers@google.com> Signed-off-by: NMiguel Ojeda <miguel.ojeda.sandonis@gmail.com> Signed-off-by: NWill Deacon <will.deacon@arm.com>
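A minimal illustration with a made-up helper; only the asm_volatile_goto() substitution reflects the change described above:

```c
#include <linux/compiler.h>	/* provides asm_volatile_goto() */

/* Hypothetical helper, for illustration only. */
static inline bool branch_if_nonzero(unsigned long x)
{
	/*
	 * Plain "asm goto" can be miscompiled by GCC < 4.8.2;
	 * asm_volatile_goto() applies the documented workaround.
	 */
	asm_volatile_goto("cbnz	%0, %l[l_yes]"
			  : /* no outputs permitted with asm goto */
			  : "r" (x)
			  : /* no clobbers */
			  : l_yes);
	return false;
l_yes:
	return true;
}
```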
-
- September 7, 2018: 2 commits
-
-
Submitted by Steven Price
The lock has never been used and the page tables are protected by mmu_lock in struct kvm. Reviewed-by: NSuzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: NSteven Price <steven.price@arm.com> Signed-off-by: NMarc Zyngier <marc.zyngier@arm.com> Signed-off-by: NChristoffer Dall <christoffer.dall@arm.com>
-
Submitted by Marc Zyngier
kvm_unmap_hva is long gone, and we only have kvm_unmap_hva_range to deal with. Drop the now obsolete code. Fixes: fb1522e0 ("KVM: update to new mmu_notifier semantic v2") Cc: James Hogan <jhogan@kernel.org> Reviewed-by: NPaolo Bonzini <pbonzini@redhat.com> Signed-off-by: NMarc Zyngier <marc.zyngier@arm.com> Signed-off-by: NChristoffer Dall <christoffer.dall@arm.com>
-
- August 24, 2018: 1 commit
-
-
Submitted by Will Deacon
As of commit fd1102f0 ("mm: mmu_notifier fix for tlb_end_vma"), asm-generic/tlb.h now calls tlb_flush() from a static inline function, so we need to make sure that it's declared before #including the asm-generic header in the arch header. Signed-off-by: NWill Deacon <will.deacon@arm.com> Acked-by: NNicholas Piggin <npiggin@gmail.com> Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
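The fix is purely an ordering constraint in the arch header; a hedged sketch of the resulting shape (the flush body is simplified):

```c
/* Sketch of arch/arm64/include/asm/tlb.h ordering, not a verbatim copy. */
static void tlb_flush(struct mmu_gather *tlb);

#include <asm-generic/tlb.h>	/* its static inlines now call tlb_flush() */

static inline void tlb_flush(struct mmu_gather *tlb)
{
	/* the real arm64 logic distinguishes range vs. full-MM flushes */
	flush_tlb_mm(tlb->mm);
}
```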
-
- August 12, 2018: 1 commit
-
-
Submitted by Marc Zyngier
In order to generate Group0 SGIs, let's add some decoding logic to access_gic_sgi(), and pass the generating group accordingly. Reviewed-by: NEric Auger <eric.auger@redhat.com> Reviewed-by: NChristoffer Dall <christoffer.dall@arm.com> Signed-off-by: NMarc Zyngier <marc.zyngier@arm.com>
-
- August 3, 2018: 1 commit
-
-
Submitted by Palmer Dabbelt
It appears arm64 copied arm's GENERIC_IRQ_MULTI_HANDLER code, but made it unconditional. Converts the arm64 code to use the new generic code, which simply consists of deleting the arm64 code and setting MULTI_IRQ_HANDLER instead. Signed-off-by: NPalmer Dabbelt <palmer@sifive.com> Signed-off-by: NThomas Gleixner <tglx@linutronix.de> Reviewed-by: NChristoph Hellwig <hch@lst.de> Cc: linux@armlinux.org.uk Cc: catalin.marinas@arm.com Cc: Will Deacon <will.deacon@arm.com> Cc: jonas@southpole.se Cc: stefan.kristiansson@saunalahti.fi Cc: shorne@gmail.com Cc: jason@lakedaemon.net Cc: marc.zyngier@arm.com Cc: Arnd Bergmann <arnd@arndb.de> Cc: nicolas.pitre@linaro.org Cc: vladimir.murzin@arm.com Cc: keescook@chromium.org Cc: jinb.park7@gmail.com Cc: yamada.masahiro@socionext.com Cc: alexandre.belloni@bootlin.com Cc: pombredanne@nexb.com Cc: Greg KH <gregkh@linuxfoundation.org> Cc: kstewart@linuxfoundation.org Cc: jhogan@kernel.org Cc: mark.rutland@arm.com Cc: ard.biesheuvel@linaro.org Cc: james.morse@arm.com Cc: linux-arm-kernel@lists.infradead.org Cc: openrisc@lists.librecores.org Link: https://lkml.kernel.org/r/20180622170126.6308-4-palmer@sifive.com
-
- August 2, 2018: 1 commit
-
-
Submitted by Linus Torvalds
Commit 2c4541e2 ("mm: use vma_init() to initialize VMAs on stack and data segments") tried to initialize various left-over ad-hoc vma's "properly", but actually made things worse for the temporary vma's used for TLB flushing. vma_init() doesn't actually initialize all of the vma, just a few fields, so doing something like

  -       struct vm_area_struct vma = { .vm_mm = tlb->mm, };
  +       struct vm_area_struct vma;
  +
  +       vma_init(&vma, tlb->mm);

was actually very bad: instead of having a nicely initialized vma with every field but "vm_mm" zeroed, you'd have an entirely uninitialized vma with only a couple of fields initialized. And they weren't even fields that the code in question mostly cared about. The flush_tlb_range() function takes a "struct vma" rather than a "struct mm_struct", because a few architectures actually care about what kind of range it is - being able to only do an ITLB flush if it's a range that doesn't have data accesses enabled, for example. And all the normal users already have the vma for doing the range invalidation. But a few people want to call flush_tlb_range() with a range they just made up, so they also end up using a made-up vma. x86 just has a special "flush_tlb_mm_range()" function for this, but other architectures (arm and ia64) do the "use fake vma" thing instead, and thus got caught up in the vma_init() changes. At the same time, the TLB flushing code really doesn't care about most other fields in the vma, so vma_init() is just unnecessary and pointless. This fixes things by having an explicit "this is just an initializer for the TLB flush" initializer macro, which is used by the arm/arm64/ia64 people who mis-use this interface with just a dummy vma. Fixes: 2c4541e2 ("mm: use vma_init() to initialize VMAs on stack and data segments") Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Kirill Shutemov <kirill.shutemov@linux.intel.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: John Stultz <john.stultz@linaro.org> Cc: Hugh Dickins <hughd@google.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
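The initializer macro it introduces can be sketched as follows; field names follow struct vm_area_struct, and the exact header location is an assumption:

```c
/* Hedged sketch: a designated-initializer list zeroes every other field. */
#define TLB_FLUSH_VMA(mm, flags)	{ .vm_mm = (mm), .vm_flags = (flags) }

/* Typical use in an arch tlb_flush() implementation (illustrative): */
static inline void example_tlb_flush(struct mmu_gather *tlb)
{
	struct vm_area_struct vma = TLB_FLUSH_VMA(tlb->mm, 0);

	flush_tlb_range(&vma, tlb->start, tlb->end);
}
```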
-
- July 31, 2018: 1 commit
-
-
Submitted by Ard Biesheuvel
When kernel mode NEON was first introduced to the arm64 kernel, every call to kernel_neon_begin()/_end() stacked and unstacked, respectively, the entire NEON register file, making it worthwhile to reduce the number of used NEON registers to a bare minimum, and only stack those. kernel_neon_begin_partial() was introduced for this purpose, but after the refactoring for SVE and other changes, it no longer exists and was simply #define'd to kernel_neon_begin() directly. In the meantime, all users have been updated, so let's remove the fallback macro. Reviewed-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- July 27, 2018: 1 commit
-
-
Submitted by Kirill A. Shutemov
Make sure to initialize all VMAs properly, not only those which come from vm_area_cachep. Link: http://lkml.kernel.org/r/20180724121139.62570-3-kirill.shutemov@linux.intel.com Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
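For reference, a hedged sketch of what vma_init() did at this point in time; treat the body as a reconstruction rather than a verbatim copy of include/linux/mm.h (it sets only a handful of fields, which is exactly what the TLB-flush fix in the entry above complains about):

```c
/* Reconstructed sketch of vma_init(). */
static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
{
	static const struct vm_operations_struct dummy_vm_ops = {};

	vma->vm_mm = mm;
	vma->vm_ops = &dummy_vm_ops;	/* never NULL, simplifies callers */
	INIT_LIST_HEAD(&vma->anon_vma_chain);
}
```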
-
- July 26, 2018: 2 commits
-
-
Submitted by Laura Abbott
This adds support for the STACKLEAK gcc plugin to arm64 by implementing stackleak_check_alloca(), based heavily on the x86 version, and adding the two helpers used by the stackleak common code: current_top_of_stack() and on_thread_stack(). The stack erasure calls are made at syscall returns. Additionally, this disables the plugin in hypervisor and EFI stub code, which are out of scope for the protection. Acked-by: NAlexander Popov <alex.popov@linux.com> Reviewed-by: NMark Rutland <mark.rutland@arm.com> Reviewed-by: NKees Cook <keescook@chromium.org> Signed-off-by: NLaura Abbott <labbott@redhat.com> Signed-off-by: NWill Deacon <will.deacon@arm.com>
-
Submitted by Laura Abbott
In preparation for enabling the stackleak plugin on arm64, we need a way to get the bounds of the current stack. Extend on_accessible_stack to get this information. Acked-by: NAlexander Popov <alex.popov@linux.com> Reviewed-by: NMark Rutland <mark.rutland@arm.com> Signed-off-by: NLaura Abbott <labbott@redhat.com> [will: folded in fix for allmodconfig build breakage w/ sdei] Signed-off-by: NWill Deacon <will.deacon@arm.com>
-
- July 23, 2018: 1 commit
-
-
Submitted by AKASHI Takahiro
This is a fix against the issue that the crash dump kernel may hang up during booting, which can happen on any ACPI-based system with "ACPI Reclaim Memory."

(kernel messages after panic kicked off kdump)
  (snip...)
  Bye!
  (snip...)
  ACPI: Core revision 20170728
  pud=000000002e7d0003, *pmd=000000002e7c0003, *pte=00e8000039710707
  Internal error: Oops: 96000021 [#1] SMP
  Modules linked in:
  CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.14.0-rc6 #1
  task: ffff000008d05180 task.stack: ffff000008cc0000
  PC is at acpi_ns_lookup+0x25c/0x3c0
  LR is at acpi_ds_load1_begin_op+0xa4/0x294
  (snip...)
  Process swapper/0 (pid: 0, stack limit = 0xffff000008cc0000)
  Call trace:
  (snip...)
  [<ffff0000084a6764>] acpi_ns_lookup+0x25c/0x3c0
  [<ffff00000849b4f8>] acpi_ds_load1_begin_op+0xa4/0x294
  [<ffff0000084ad4ac>] acpi_ps_build_named_op+0xc4/0x198
  [<ffff0000084ad6cc>] acpi_ps_create_op+0x14c/0x270
  [<ffff0000084acfa8>] acpi_ps_parse_loop+0x188/0x5c8
  [<ffff0000084ae048>] acpi_ps_parse_aml+0xb0/0x2b8
  [<ffff0000084a8e10>] acpi_ns_one_complete_parse+0x144/0x184
  [<ffff0000084a8e98>] acpi_ns_parse_table+0x48/0x68
  [<ffff0000084a82cc>] acpi_ns_load_table+0x4c/0xdc
  [<ffff0000084b32f8>] acpi_tb_load_namespace+0xe4/0x264
  [<ffff000008baf9b4>] acpi_load_tables+0x48/0xc0
  [<ffff000008badc20>] acpi_early_init+0x9c/0xd0
  [<ffff000008b70d50>] start_kernel+0x3b4/0x43c
  Code: b9008fb9 2a000318 36380054 32190318 (b94002c0)
  ---[ end trace c46ed37f9651c58e ]---
  Kernel panic - not syncing: Fatal exception
  Rebooting in 10 seconds..

(diagnosis)
* This fault is a data abort, alignment fault (ESR=0x96000021) while reading out an ACPI table.
* Initial ACPI tables are normally stored in system ram and marked as "ACPI Reclaim memory" by the firmware.
* After the commit f56ab9a5 ("efi/arm: Don't mark ACPI reclaim memory as MEMBLOCK_NOMAP"), those regions are handled differently as they are "memblock-reserved", without the NOMAP bit.
* So they are now excluded from the device tree's "usable-memory-range", which kexec-tools determines based on a current view of /proc/iomem.
* When the crash dump kernel boots up, it tries to access ACPI tables by mapping them with ioremap(), not ioremap_cache(), in acpi_os_ioremap() since they are no longer part of mapped system ram.
* Given that ACPI accessor/helper functions are compiled in without unaligned access support (ACPI_MISALIGNMENT_NOT_SUPPORTED), any unaligned access to ACPI tables can cause a fatal panic.

With this patch, acpi_os_ioremap() always honors memory attribute information provided by the firmware (EFI), and retaining cacheability allows the kernel safe access to ACPI tables. Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Reviewed-by: James Morse <james.morse@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reported-by and Tested-by: Bhupesh Sharma <bhsharma@redhat.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- July 22, 2018: 1 commit
-
-
Submitted by Lukas Wunner
There's one ARM, one x86_32 and one x86_64 version of efi_open_volume() which can be folded into a single shared version by masking their differences with the efi_call_proto() macro introduced by commit: 3552fdf2 ("efi: Allow bitness-agnostic protocol calls"). To be able to dereference the device_handle attribute from the efi_loaded_image_t table in an arch- and bitness-agnostic manner, introduce the efi_table_attr() macro (which already exists for x86) to arm and arm64. No functional change intended. Signed-off-by: Lukas Wunner <lukas@wunner.de> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Hans de Goede <hdegoede@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-efi@vger.kernel.org Link: http://lkml.kernel.org/r/20180720014726.24031-7-ard.biesheuvel@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- July 21, 2018: 3 commits
-
-
Submitted by James Morse
The get/set events helpers do some work to check that reserved and padding fields are zero. This is useful on 32bit too. Move this code into virt/kvm/arm/arm.c, and give the arch code some underscores. This is temporarily hidden behind __KVM_HAVE_VCPU_EVENTS until 32bit is wired up. Signed-off-by: James Morse <james.morse@arm.com> Reviewed-by: Dongjiu Geng <gengdongjiu@huawei.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Dongjiu Geng
For migrating VMs, user space may need to know the exception state. For example, if KVM has made an SError pending on machine A, then when the VM is migrated to machine B, KVM also needs to pend an SError there. This new IOCTL exports user-invisible states related to SError. Together with appropriate user space changes, user space can get/set the SError exception state to do migrate/snapshot/suspend. Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com> Reviewed-by: James Morse <james.morse@arm.com> [expanded documentation wording] Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Marc Zyngier
When running on a non-VHE system, we initialize tpidr_el2 to contain the per-CPU offset required to reach per-cpu variables. Actually, we initialize it twice: the first time as part of the EL2 initialization, by copying tpidr_el1 into its el2 counterpart, and another time by calling into __kvm_set_tpidr_el2. It turns out that the first part is wrong, as it includes the distance between the kernel mapping and the linear mapping, while EL2 only cares about the linear mapping. This was the last vestige of the first per-cpu use of tpidr_el2 that came in with SDEI. The only caller then was hyp_panic(), and its now using the pc-relative get_host_ctxt() stuff, instead of kimage addresses from the literal pool. It is not a big deal, as we override it straight away, but it is slightly confusing. In order to clear said confusion, let's set this directly as part of the hyp-init code, and drop the ad-hoc HYP helper. Reviewed-by: NJames Morse <james.morse@arm.com> Acked-by: NChristoffer Dall <christoffer.dall@arm.com> Signed-off-by: NMarc Zyngier <marc.zyngier@arm.com>
-
- July 12, 2018: 5 commits
-
-
Submitted by Mark Rutland
To minimize the risk of userspace-controlled values being used under speculation, this patch adds pt_regs based syscall wrappers for arm64, which pass the minimum set of required userspace values to syscall implementations. For each syscall, a wrapper which takes a pt_regs argument is automatically generated, and this extracts the arguments before calling the "real" syscall implementation. Each syscall has three functions generated:

* __do_<compat_>sys_<name> is the "real" syscall implementation, with the expected prototype.

* __se_<compat_>sys_<name> is the sign-extension/narrowing wrapper, inherited from common code. This takes a series of long parameters, casting each to the requisite types required by the "real" syscall implementation in __do_<compat_>sys_<name>. This wrapper *may* not be necessary on arm64 given the AAPCS rules on unused register bits, but it seemed safer to keep the wrapper for now.

* __arm64_<compat_>_sys_<name> takes a struct pt_regs pointer, and extracts *only* the relevant register values, passing these on to the __se_<compat_>sys_<name> wrapper.

The syscall invocation code is updated to handle the calling convention required by __arm64_<compat_>_sys_<name>, and passes a single struct pt_regs pointer. The compiler can fold the syscall implementation and its wrappers, such that the overhead of this approach is minimized. Note that we play games with sys_ni_syscall(). It can't be defined with SYSCALL_DEFINE0() because we must avoid the possibility of error injection. Additionally, there are a couple of locations where we need to call it from C code, and we don't (currently) have a ksys_ni_syscall(). While it has no wrapper, passing in a redundant pt_regs pointer is benign per the AAPCS. When ARCH_HAS_SYSCALL_WRAPPER is selected, no prototype is defined for sys_ni_syscall(). Since we need to treat it differently for in-kernel calls and the syscall tables, the prototype is defined as-required. The wrappers are largely the same as their x86 counterparts, but simplified as we don't have a variety of compat calling conventions that require separate stubs. Unlike x86, we have some zero-argument compat syscalls, and must define COMPAT_SYSCALL_DEFINE0() to ensure that these are also given an __arm64_compat_sys_ prefix. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
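As an illustration, here is roughly how the three generated functions fit together for a made-up single-argument syscall; the macro expansion is simplified and the syscall is hypothetical:

```c
/* Sketch only: what SYSCALL_DEFINE1(example, int, fd) conceptually expands
 * to under ARCH_HAS_SYSCALL_WRAPPER on arm64. "example" is not a real syscall. */

static inline long __do_sys_example(int fd)
{
	/* the "real" implementation body, with the expected prototype */
	return fd < 0 ? -EINVAL : 0;
}

static long __se_sys_example(__u64 fd)
{
	/* sign-extension/narrowing wrapper: casts the raw register value */
	return __do_sys_example((int)fd);
}

asmlinkage long __arm64_sys_example(const struct pt_regs *regs)
{
	/* pt_regs wrapper: extracts only the registers this syscall uses */
	return __se_sys_example(regs->regs[0]);
}
```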
-
Submitted by Mark Rutland
In preparation for converting to pt_regs syscall wrappers, convert our existing compat wrappers to C. This will allow the pt_regs wrappers to be automatically generated, and will allow for the compat register manipulation to be folded in with the pt_regs accesses. To avoid confusion with the upcoming pt_regs wrappers and existing compat wrappers provided by core code, the C wrappers are renamed to compat_sys_aarch32_<syscall>. With the assembly wrappers gone, we can get rid of entry32.S and the associated boilerplate. Note that these must call the ksys_* syscall entry points, as the usual sys_* entry points will be modified to take a single pt_regs pointer argument. Signed-off-by: NMark Rutland <mark.rutland@arm.com> Acked-by: NCatalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: NWill Deacon <will.deacon@arm.com>
-
Submitted by Mark Rutland
Now that the syscall invocation logic is in C, we can migrate the rest of the syscall entry logic over, so that the entry assembly needn't look at the register values at all. The SVE reset across syscall logic now unconditionally clears TIF_SVE, but sve_user_disable() will only write back to CPACR_EL1 when SVE is actually enabled. Signed-off-by: NMark Rutland <mark.rutland@arm.com> Reviewed-by: NCatalin Marinas <catalin.marinas@arm.com> Reviewed-by: NDave Martin <dave.martin@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: NWill Deacon <will.deacon@arm.com>
-
Submitted by Mark Rutland
In preparation for invoking arbitrary syscalls from C code, let's define a type for an arbitrary syscall, matching the parameter passing rules of the AAPCS. There should be no functional change as a result of this patch. Signed-off-by: NMark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: NWill Deacon <will.deacon@arm.com>
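A hedged sketch of the type being introduced: six unsigned long parameters, matching the AAPCS argument registers x0-x5 (the exact header placement is assumed):

```c
/* Sketch: an arbitrary syscall entry point as seen from C at this stage,
 * before the later pt_regs-wrapper conversion. */
typedef long (*syscall_fn_t)(unsigned long, unsigned long,
			     unsigned long, unsigned long,
			     unsigned long, unsigned long);
```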
-
Submitted by Mark Rutland
The arm64 sigreturn* syscall handlers are non-standard. Rather than taking a number of user parameters in registers as per the AAPCS, they expect the pt_regs as their sole argument. To make this work, we override the syscall definitions to invoke wrappers written in assembly, which mov the SP into x0, and branch to their respective C functions. On other architectures (such as x86), the sigreturn* functions take no argument and instead use current_pt_regs() to acquire the user registers. This requires less boilerplate code, and allows for other features such as interposing C code in this path. This patch takes the same approach for arm64. Signed-off-by: NMark Rutland <mark.rutland@arm.com> Tentatively-reviewed-by: NDave Martin <dave.martin@arm.com> Reviewed-by: NCatalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: NWill Deacon <will.deacon@arm.com>
-