- 26 August 2021, 4 commits
-
-
Committed by Alexandru Elisei
Commit 31c00d2a ("arm64: Disable fine grained traps on boot") zeroed the fine grained trap registers to prevent unwanted register traps from occurring. However, for the PMSNEVFR_EL1 register, the corresponding HDFG{R,W}TR_EL2.nPMSNEVFR_EL1 fields must be 1 to disable trapping. Set both fields to 1 if FEAT_SPEv1p2 is detected, to disable the read and write traps.

Fixes: 31c00d2a ("arm64: Disable fine grained traps on boot")
Cc: <stable@vger.kernel.org> # 5.13.x
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210824154523.906270-1-alexandru.elisei@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
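
A hedged, C-level sketch of the polarity issue being fixed is below. The real change lives in the EL2 boot-time assembly macros; the feature check, bit position and register names here are assumptions made for illustration, not existing kernel definitions.

    #include <linux/bits.h>
    #include <asm/sysreg.h>

    /*
     * Placeholder names: nPMSNEVFR_EL1 is a negative-polarity trap control,
     * so leaving it zero (as blanket zeroing of the FGT registers does)
     * keeps the trap enabled; it must be set to 1 to disable trapping.
     */
    #define HDFGxTR_EL2_nPMSNEVFR_EL1	BIT(62)		/* placeholder bit */

    static void init_fine_grained_traps(void)
    {
    	u64 val = 0;

    	if (cpu_supports_feat_spe_v1p2())		/* placeholder check */
    		val |= HDFGxTR_EL2_nPMSNEVFR_EL1;

    	write_sysreg_s(val, SYS_HDFGRTR_EL2);		/* register names assumed */
    	write_sysreg_s(val, SYS_HDFGWTR_EL2);
    }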
-
Committed by Lorenzo Pieralisi
The memory attributes attached to memory regions depend on architecture specific mappings. For some memory regions, the attributes specified by firmware (eg uncached) are not sufficient to determine how a memory region should be mapped by an OS (for instance a region that is defined as uncached in firmware can be mapped as Normal or Device memory on arm64), and therefore the OS must be given control over how to map the region to match the expected mapping behaviour (eg if a mapping is requested with memory semantics, it must allow unaligned accesses).

Rework the acpi_os_map_memory() and acpi_os_ioremap() back-ends to split them into two separate code paths:

acpi_os_memmap() -> memory semantics
acpi_os_ioremap() -> MMIO semantics

The split allows the architectural implementation back-ends to detect the default memory attributes required by the mapping in question (ie the mapping API defines the semantics, memory vs MMIO) and map the memory accordingly.

Link: https://lore.kernel.org/linux-arm-kernel/31ffe8fc-f5ee-2858-26c5-0fd8bdd68702@arm.com
Tested-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
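
As a rough sketch of the resulting split (assuming the generic fallbacks; the arm64 back-end additionally consults the firmware-provided attributes to choose the exact memory type):

    #include <linux/acpi.h>
    #include <linux/io.h>

    /* Memory semantics: map as Normal memory so unaligned accesses work. */
    void __iomem *acpi_os_memmap(acpi_physical_address phys, acpi_size size)
    {
    	return ioremap_cache(phys, size);
    }

    /* MMIO semantics: map as Device memory. */
    void __iomem *acpi_os_ioremap(acpi_physical_address phys, acpi_size size)
    {
    	return ioremap(phys, size);
    }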
-
Committed by Xujun Leng
Fix a typo in the comment of the macro pud_offset_phys().

Signed-off-by: Xujun Leng <lengxujun2007@126.com>
Link: https://lore.kernel.org/r/20210825150526.12582-1-lengxujun2007@126.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Will Deacon
Commit 77097ae5 ("most of set_current_blocked() callers want SIGKILL/SIGSTOP removed from set") extended set_current_blocked() to remove SIGKILL and SIGSTOP from the new signal set and updated all callers accordingly. Unfortunately, this collided with the merge of the arm64 architecture, which duly removes these signals when restoring the compat sigframe, as this was what was previously done by arch/arm/.

Remove the redundant call to sigdelsetmask() from compat_restore_sigframe().

Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210825093911.24493-1-will@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 25 August 2021, 1 commit
-
-
Committed by Will Deacon
This partially reverts commit 16c9afc7.

Alex Bee reports a regression in 5.14 on their RK3328 SoC when configuring the PL330 DMA controller:

| ------------[ cut here ]------------
| WARNING: CPU: 2 PID: 373 at kernel/dma/mapping.c:235 dma_map_resource+0x68/0xc0
| Modules linked in: spi_rockchip(+) fuse
| CPU: 2 PID: 373 Comm: systemd-udevd Not tainted 5.14.0-rc7 #1
| Hardware name: Pine64 Rock64 (DT)
| pstate: 80000005 (Nzcv daif -PAN -UAO -TCO BTYPE=--)
| pc : dma_map_resource+0x68/0xc0
| lr : pl330_prep_slave_fifo+0x78/0xd0

This appears to be because dma_map_resource() is being called for a physical address which does not correspond to a memory address yet does have a valid 'struct page' due to the way in which the vmemmap is constructed.

Prior to 16c9afc7 ("arm64/mm: drop HAVE_ARCH_PFN_VALID"), the arm64 implementation of pfn_valid() called memblock_is_memory() to return 'false' for such regions and the DMA mapping request would proceed. However, now that we are using the generic implementation where only the presence of the memory map entry is considered, we return 'true' and erroneously fail with DMA_MAPPING_ERROR because we identify the region as DRAM.

Although fixing this in the DMA mapping code is arguably the right fix, it is a risky, cross-architecture change at this stage in the cycle. So just revert arm64 back to its old pfn_valid() implementation for v5.14. The change to the generic pfn_valid() code is preserved from the original patch, so as to avoid impacting other architectures.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Reported-by: Alex Bee <knaerzche@gmail.com>
Link: https://lore.kernel.org/r/d3a3c828-b777-faf8-e901-904995688437@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
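
A hedged sketch of the restored arm64-specific behaviour (the real function also validates the sparsemem section before consulting memblock):

    #include <linux/memblock.h>
    #include <linux/pfn.h>

    int pfn_valid(unsigned long pfn)
    {
    	phys_addr_t addr = PFN_PHYS(pfn);

    	/* Reject pfns whose physical address cannot be represented. */
    	if (PHYS_PFN(addr) != pfn)
    		return 0;

    	/*
    	 * Only report "valid" for ranges memblock knows to be memory, so
    	 * that MMIO regions covered by the vmemmap are not mistaken for
    	 * DRAM by dma_map_resource().
    	 */
    	return memblock_is_map_memory(addr);
    }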
-
- 24 August 2021, 4 commits
-
-
Committed by Mark Brown
Currently we "handle" failure to allocate the SVE register storage by doing a BUG_ON() and hoping for the best. This is obviously not great, and the memory allocation failure will already be loud enough without the BUG_ON(). As the comment says, it is a corner case, but let's try to do a bit better: remove the BUG_ON() and add code to handle the failure in the callers.

For the ptrace and signal code we can return -ENOMEM gracefully; however, we have no real error reporting path available to us for the SVE access trap, so instead generate a SIGKILL if the allocation fails there. This at least means that we won't try to soldier on and end up trying to access the nonexistent state, and while it's obviously not ideal for userspace, SIGKILL doesn't allow any handling, so it minimises the ABI impact and makes it easier to improve the interface later if we come up with a better idea.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210824153417.18371-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
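
A minimal sketch of the SVE access trap side of this change (signature approximate for this kernel version; the ptrace and signal paths instead return -ENOMEM):

    #include <linux/sched/signal.h>

    void do_sve_acc(unsigned int esr, struct pt_regs *regs)
    {
    	/* Allocate the SVE register storage on first use. */
    	sve_alloc(current);
    	if (!current->thread.sve_state) {
    		/*
    		 * There is no error reporting path back to userspace here,
    		 * so kill the task rather than soldiering on with missing
    		 * state.
    		 */
    		force_sig(SIGKILL);
    		return;
    	}

    	/* ... normal handling of the access trap continues ... */
    }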
-
Committed by Mark Rutland
The `compute_indices` and `populate_entries` macros operate on inclusive bounds, and thus the `map_memory` macro which uses them also operates on inclusive bounds. We pass `_end` and `_idmap_text_end` to `map_memory`, but these are exclusive bounds, and if one of these is sufficiently aligned (as a result of kernel configuration, physical placement, and KASLR), then:

* In `compute_indices`, the computed `iend` will be in the page/block *after* the final byte of the intended mapping.

* In `populate_entries`, an unnecessary entry will be created at the end of each level of table. At the leaf level, this entry will map up to SWAPPER_BLOCK_SIZE bytes of physical addresses that we did not intend to map.

As we may map up to SWAPPER_BLOCK_SIZE bytes more than intended, we may violate the boot protocol and map physical addresses past the 2MiB-aligned end address we are permitted to map. As we map these with Normal memory attributes, this may result in further problems depending on what these physical addresses correspond to.

The final entry at each level may require an additional table at that level. As EARLY_ENTRIES() calculates an inclusive bound, we allocate enough memory for this.

Avoid the extraneous mapping by having map_memory convert the exclusive end address to an inclusive end address by subtracting one, and do likewise in EARLY_ENTRIES() when calculating the number of required tables. For clarity, comments are updated to more clearly document which boundaries the macros operate on. For consistency with the other macros, the comments in map_memory are also updated to describe `vstart` and `vend` as virtual addresses.

Fixes: 0370b31e ("arm64: Extend early page table code to allow for larger kernels")
Cc: <stable@vger.kernel.org> # 4.16.x
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Steve Capper <steve.capper@arm.com>
Cc: Will Deacon <will@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210823101253.55567-1-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
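
The arithmetic being fixed can be summarised as below: a sketch of the inclusive-bound entry count (the assembly macros achieve the same by subtracting one from the exclusive end address); treat the helper name as illustrative.

    /*
     * Number of entries needed at a given table level to map [vstart, vend),
     * where vend is exclusive: convert it to an inclusive bound first, so a
     * block-aligned vend does not claim one extra, unintended entry.
     */
    #define SPAN_NR_ENTRIES(vstart, vend, shift) \
    	((((vend) - 1) >> (shift)) - ((vstart) >> (shift)) + 1)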
-
Committed by Mark Brown
At some point it would be nice to avoid the need to manually encode SVE instructions; add a note of the binutils version required, to save looking it up.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210816125024.8112-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Mark Brown
The use of macros for the actual function bodies means legibility is always going to be a bit of a challenge, especially while we can't rely on SVE support in the toolchain, but this helps a little.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210812201143.35578-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 21 August 2021, 1 commit
-
-
Committed by Changbin Du
Replace the obsolete and ambiguous macro in_irq() with the new macro in_hardirq().

Signed-off-by: Changbin Du <changbin.du@gmail.com>
Link: https://lore.kernel.org/r/20210814005405.2658-1-changbin.du@gmail.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 20 August 2021, 7 commits
-
-
Committed by Steen Hegelund
This adds the interrupt for the Sparx5 Frame DMA. If this configuration is present, the Sparx5 SwitchDev driver will use the Frame DMA feature; if not, it will use register-based injection and extraction for sending and receiving frames to the CPU.

Signed-off-by: Steen Hegelund <steen.hegelund@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Will Deacon
The scheduler now knows enough about these braindead systems to place 32-bit tasks accordingly, so throw out the safety checks and allow the ret-to-user path to avoid do_notify_resume() if there is nothing to do.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210730112443.23245-16-will@kernel.org
-
Committed by Will Deacon
Allow systems with mismatched 32-bit support at EL0 to run 32-bit applications based on a new kernel parameter.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210730112443.23245-15-will@kernel.org
-
Committed by Will Deacon
Since 32-bit applications will be killed if they are caught trying to execute on a 64-bit-only CPU in a mismatched system, advertise the set of 32-bit capable CPUs to userspace in sysfs.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210730112443.23245-14-will@kernel.org
-
Committed by Will Deacon
If we want to support 32-bit applications, then when we identify a CPU with mismatched 32-bit EL0 support we must ensure that we will always have an active 32-bit CPU available to us from then on. This is important for the scheduler, because is_cpu_allowed() will be constrained to 32-bit CPUs for compat tasks, and forced migration due to a hotplug event will hang if no 32-bit CPUs are available.

On detecting a mismatch, prevent offlining of either the mismatching CPU if it is 32-bit capable, or find the first active 32-bit capable CPU otherwise.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210730112443.23245-13-will@kernel.org
-
Committed by Will Deacon
When exec'ing a 32-bit task on a system with mismatched support for 32-bit EL0, try to ensure that it starts life on a CPU that can actually run it. Similarly, when exec'ing a 64-bit task on such a system, try to restore the old affinity mask if it was previously restricted.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Daniel Bristot de Oliveira <bristot@redhat.com>
Reviewed-by: Quentin Perret <qperret@google.com>
Link: https://lore.kernel.org/r/20210730112443.23245-12-will@kernel.org
-
Committed by Will Deacon
Provide an implementation of task_cpu_possible_mask() so that we can prevent 64-bit-only cores being added to the 'cpus_mask' for compat tasks on systems with mismatched 32-bit support at EL0.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210730112443.23245-11-will@kernel.org
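
A hedged sketch of what such an implementation looks like, assuming the helpers introduced elsewhere in this series (a static key for the mismatch and system_32bit_el0_cpumask()):

    #include <linux/cpumask.h>
    #include <linux/jump_label.h>
    #include <linux/sched.h>

    static inline const struct cpumask *
    task_cpu_possible_mask(struct task_struct *p)
    {
    	/* Nothing is mismatched: any CPU can run any task. */
    	if (!static_branch_unlikely(&arm64_mismatched_32bit_el0))
    		return cpu_possible_mask;

    	/* 64-bit tasks are unaffected. */
    	if (!is_compat_thread(task_thread_info(p)))
    		return cpu_possible_mask;

    	/* Compat tasks may only use the CPUs that implement 32-bit EL0. */
    	return system_32bit_el0_cpumask();
    }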
-
- 19 August 2021, 1 commit
-
-
Committed by Mark Rutland
In __init_el2_timers we initialize CNTHCTL_EL2.{EL1PCEN,EL1PCTEN} with an RMW sequence, leaving all other bits UNKNOWN.

In general, we should initialize all bits in a register rather than using an RMW sequence, since most bits are UNKNOWN out of reset, and as new bits are added to the register their reset value might not result in expected behaviour.

In the case of CNTHCTL_EL2, FEAT_ECV added a number of new control bits in previously RES0 bits, which reset to UNKNOWN values and may cause issues for EL1 and EL0:

* CNTHCTL_EL2.ECV enables the CNTPOFF_EL2 offset (which itself resets to an UNKNOWN value) at EL0 and EL1. Since the offset could reset to distinct values across CPUs, when the control bit resets to 1 this could break timekeeping generally.

* CNTHCTL_EL2.{EL1TVT,EL1TVCT} trap EL0 and EL1 accesses to the EL1 virtual timer/counter registers to EL2. When reset to 1, this could cause unexpected traps to EL2.

Initializing these bits to zero avoids these problems, and all other bits in CNTHCTL_EL2 other than EL1PCEN and EL1PCTEN can safely be reset to zero.

This patch ensures we initialize CNTHCTL_EL2 accordingly, only setting EL1PCEN and EL1PCTEN, and setting all other bits to zero.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Oliver Upton <oupton@google.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Oliver Upton <oupton@google.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210818161535.52786-1-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
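
In C-equivalent terms (the actual change is in the __init_el2_timers assembly macro run at EL2 during early boot), the intent is roughly:

    #include <linux/bits.h>
    #include <asm/sysreg.h>

    static void init_el2_timers(void)
    {
    	/*
    	 * Write CNTHCTL_EL2 outright rather than read-modify-write: only
    	 * EL1PCTEN (bit 0) and EL1PCEN (bit 1) are set, so the FEAT_ECV
    	 * controls and every other UNKNOWN-out-of-reset bit end up zero.
    	 */
    	write_sysreg(BIT(0) | BIT(1), cnthctl_el2);
    	write_sysreg(0, cntvoff_el2);	/* clear the virtual counter offset */
    }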
-
- 17 August 2021, 1 commit
-
-
Committed by Rajendra Nayak
qup-i2c devices on sc7180 are clocked with a fixed clock (19.2 MHz). Though qup-i2c does not support DVFS, it still needs to vote for a performance state on 'CX' to satisfy the 19.2 MHz clock frequency requirement. Use 'required-opps' to pass this information from the device tree, and also add the power-domains property to specify the CX power domain.

Signed-off-by: Rajendra Nayak <rnayak@codeaurora.org>
Reviewed-by: Stephen Boyd <swboyd@chromium.org>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 11 August 2021, 2 commits
-
-
Committed by Andrew Delgadillo
commit a5b8ca97 ("arm64: do not descend to vdso directories twice") changes the cleaning behavior of arm64's vdso files, in that vdso.lds, vdso.so, and vdso.so.dbg are not removed upon a 'make clean/mrproper':

$ make defconfig ARCH=arm64
$ make ARCH=arm64
$ make mrproper ARCH=arm64
$ git clean -nxdf
Would remove arch/arm64/kernel/vdso/vdso.lds
Would remove arch/arm64/kernel/vdso/vdso.so
Would remove arch/arm64/kernel/vdso/vdso.so.dbg

To remedy this, manually descend into arch/arm64/kernel/vdso upon cleaning. After this commit:

$ make defconfig ARCH=arm64
$ make ARCH=arm64
$ make mrproper ARCH=arm64
$ git clean -nxdf
<empty>

Similar results are obtained for the vdso32 equivalent.

Signed-off-by: Andrew Delgadillo <adelg@google.com>
Cc: stable@vger.kernel.org
Fixes: a5b8ca97 ("arm64: do not descend to vdso directories twice")
Link: https://lore.kernel.org/r/20210810231755.1743524-1-adelg@google.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Anshuman Khandual
ID_AA64DFR0_PMUVER_IMP_DEF, indicating an "implementation defined" PMU, never actually gets used although there are '0xf' instances scattered all around. Use the symbolic name instead of the raw hex constant.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/1628652427-24695-2-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
- 06 August 2021, 2 commits
-
-
Committed by Will Deacon
When switching to an 'mm_struct' for the first time following an ASID rollover, a new ASID may be allocated and assigned to 'mm->context.id'. This reassignment can happen concurrently with other operations on the mm, such as unmapping pages and subsequently issuing TLB invalidation.

Consequently, we need to ensure that (a) accesses to 'mm->context.id' are atomic and (b) all page-table updates made prior to a TLBI using the old ASID are guaranteed to be visible to CPUs running with the new ASID.

This was found by inspection after reviewing the VMID changes from Shameer, but it looks like a real (yet hard to hit) bug.

Cc: <stable@vger.kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Jade Alglave <jade.alglave@arm.com>
Cc: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210806113109.2475-2-will@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
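
A minimal sketch of point (a), assuming the usual arm64 context-switch path; point (b) additionally needs barriers so that a TLBI tagged with the old ASID cannot be reordered ahead of the page-table updates:

    #include <linux/atomic.h>
    #include <linux/mm_types.h>

    /*
     * mm->context.id is an atomic64_t on arm64; always read it with
     * atomic64_read() so a concurrent rollover cannot hand us a torn or
     * stale value.
     */
    static inline u64 current_mm_asid(struct mm_struct *mm)
    {
    	return atomic64_read(&mm->context.id);
    }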
-
Committed by Mark Brown
When converting arm64 to modern assembler annotations, __bad_stack was left as a raw local label without annotations. While this will have little if any practical impact at present, it may cause issues in the future if we start using the annotations for things like reliable stack trace. Add SYM_CODE annotations to fix this.

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210804181710.19059-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 05 August 2021, 7 commits
-
-
Committed by Caleb Connolly
Fix the upper guard and the "removed_region"; this fixes the random crashes which used to occur under memory-intensive loads. I'm not sure WHY an upper guard of 0x2000 instead of 0x1000 doesn't work, but it HAS to be 0x1000.

Fixes: e60fd5ac ("arm64: dts: qcom: sdm845-oneplus-common: guard rmtfs-mem")
Signed-off-by: Caleb Connolly <caleb@connolly.tech>
Link: https://lore.kernel.org/r/20210720153125.43389-2-caleb@connolly.tech
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
-
Committed by Petr Vorel
The default definition breaks booting angler:

[ 1.862561] printk: console [ttyMSM0] enabled
[ 1.872260] msm_serial: driver initialized
D -     15524 - pm_driver_init, Delta

cont_splash_mem was introduced in 74d6d0a1, but the problem manifested after commit 86588296 ("fdt: Properly handle "no-map" field in the memory region").

Disable it, because Angler's firmware does not report where the memory is allocated (dmesg from the downstream kernel):

[ 0.000000] cma: Found cont_splash_mem@0, memory base 0x0000000000000000, size 16 MiB, limit 0x0000000000000000
[ 0.000000] cma: CMA: reserved 16 MiB at 0x0000000000000000 for cont_splash_mem

A similar issue might exist on the Google Nexus 5X (lg-bullhead). Other MSM8992/4 devices are known to report the correct address.

Fixes: 74d6d0a1 ("arm64: dts: qcom: msm8994/8994-kitakami: Fix up the memory map")
Suggested-by: Konrad Dybcio <konradybcio@gmail.com>
Signed-off-by: Petr Vorel <petr.vorel@gmail.com>
Link: https://lore.kernel.org/r/20210622191019.23771-1-petr.vorel@gmail.com
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
-
Committed by Mark Rutland
When handling an exception from EL0, we perform the entry work in that exception's C handler, and once the C handler has finished, we return back to the entry assembly. Subsequently, in the common `ret_to_user` assembly we perform the exit work that balances with the entry work. This can be somewhat difficult to follow, and makes it hard to rework the return paths (e.g. to pass additional context to the exit code, or to have exception return logic for specific exceptions).

This patch reworks the entry code such that each EL0 C exception handler is responsible for both the entry and exit work. This clearly balances the two (and will permit additional variation in future), and avoids an unnecessary bounce between assembly and C in the common case, leaving `ret_from_fork` as the only place assembly has to call the exit code. This means that the exit work is now inlined into the C handler, which is already the case for the entry work, and allows the compiler to generate better code (e.g. by immediately returning when there is no exit work to perform).

To align with other exception entry/exit helpers, enter_from_user_mode() is updated to take the EL0 pt_regs as a parameter, though this is currently unused.

There should be no functional change as a result of this patch. However, this should lead to slightly better backtraces when an error is encountered within do_notify_resume(), as the C handler should appear in the backtrace, indicating the specific exception that the kernel was entered with.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Link: https://lore.kernel.org/r/20210802140733.52716-5-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
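
The resulting shape of an EL0 handler is roughly as follows (a sketch; the real handlers in entry-common.c also deal with things such as erratum workarounds and debug state):

    static void noinstr el0_svc(struct pt_regs *regs)
    {
    	enter_from_user_mode(regs);	/* entry-side irq/context accounting */
    	do_el0_svc(regs);		/* the actual exception handling */
    	exit_to_user_mode(regs);	/* exit work, now inlined in the handler */
    }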
-
Committed by Mark Rutland
In `ret_to_user` we perform some conditional work depending on the thread flags, then perform some IRQ/context tracking which is intended to balance with the IRQ/context tracking performed in the entry C code.

For simplicity and consistency, it would be preferable to move this all to C. As a step towards that, this patch moves the conditional work and IRQ/context tracking into a C helper function. To aid bisectability, this is called from the `ret_to_user` assembly, and a subsequent patch will move the call to C code.

As local_daif_mask() handles all necessary tracing and PMR manipulation, we no longer need to handle this explicitly. As we call exit_to_user_mode() directly, the `user_enter_irqoff` macro is no longer used and can be removed. As enter_from_user_mode() and exit_to_user_mode() are no longer called from assembly, these can be made static, and as these are typically very small, they are marked __always_inline to avoid the overhead of a function call.

For now, enablement of single-step is left in entry.S, and for this we still need to read the flags in ret_to_user(). It is safe to read this separately as TIF_SINGLESTEP is not part of _TIF_WORK_MASK.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Link: https://lore.kernel.org/r/20210802140733.52716-4-mark.rutland@arm.com
[catalin.marinas@arm.com: removed unused gic_prio_kentry_setup macro]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
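
A hedged sketch of the flag-handling helper the assembly now calls into (names approximate; as noted above, single-step enablement stays in entry.S):

    static __always_inline void prepare_exit_to_user_mode(struct pt_regs *regs)
    {
    	unsigned long flags;

    	/* local_daif_mask() also handles the tracing and PMR manipulation. */
    	local_daif_mask();

    	flags = READ_ONCE(current_thread_info()->flags);
    	if (unlikely(flags & _TIF_WORK_MASK))
    		do_notify_resume(regs, flags);
    }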
-
Committed by Mark Rutland
When entering an exception, we must perform irq/context state management before we can use instrumentable C code. Similarly, when exiting an exception we cannot use instrumentable C code after we perform irq/context state management.

Originally, we'd intended that the enter_from_*() and exit_to_*() helpers would enforce this by virtue of being the first and last functions called, respectively, in an exception handler. However, as they now call instrumentable code themselves, this is not as clearly true.

To make this more robust, this patch splits the irq/context state management into separate helpers, with all the helpers commented to make their intended purpose more obvious.

In exit_to_kernel_mode() we'll now check TFSR_EL1 before we assert that IRQs are disabled, but this ordering is not important, and other than this there should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Link: https://lore.kernel.org/r/20210802140733.52716-3-mark.rutland@arm.com
[catalin.marinas@arm.com: comment typos fix-up]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Mark Rutland
To make the various entry/exit helpers easier to understand and easier to compare, this patch moves all the entry/exit helpers to be adjacent at the top of entry-common.c, rather than being spread throughout the file.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Link: https://lore.kernel.org/r/20210802140733.52716-2-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Bjorn Andersson
The MSM8996 supports CPU frequency scaling, so enable the clock driver for this.

Acked-by: Konrad Dybcio <konrad.dybcio@somainline.org>
Link: https://lore.kernel.org/r/20210804193042.1155398-1-bjorn.andersson@linaro.org
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
-
- 04 August 2021, 2 commits
-
-
Committed by Arnd Bergmann
The MaverickCrunch support for ep93xx never made it into glibc and was removed from gcc in its 4.8 release in 2012. It is now one of the last parts of arch/arm/ that fails to build with the clang integrated assembler, which is unlikely to ever want to support it.

The two alternatives are to force the use of binutils/gas when building the crunch support, or to remove it entirely. According to Hartley Sweeten:

"Martin Guy did a lot of work trying to get the maverick crunch working but I was never able to successfully use it for anything. It "kind" of works but depending on the EP93xx silicon revision there are still a number of hardware bugs that either give imprecise or garbage results. I have no problem with removing the kernel support for the maverick crunch."

Unless someone else comes up with a good reason to keep it around, remove it now. This touches mostly the ep93xx platform, but removes a bit of code from ARM common ptrace and signal frame handling as well. If there are remaining users of MaverickCrunch, they can use LTS kernels for at least another five years before kernel support ends.

Link: https://lore.kernel.org/linux-arm-kernel/20210802141245.1146772-1-arnd@kernel.org/
Link: https://lore.kernel.org/linux-arm-kernel/20210226164345.3889993-1-arnd@kernel.org/
Link: https://github.com/ClangBuiltLinux/linux/issues/1272
Link: https://gcc.gnu.org/legacy-ml/gcc/2008-03/msg01063.html
Cc: "Martin Guy" <martinwguy@martinwguy@gmail.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
Committed by Alex Elder
This reverts commit b79c6fba, reversing these changes made to 0ac26271:

commit 6a0eb6c9 ("dt-bindings: net: qcom,ipa: make imem interconnect optional")
commit f8bd3c82 ("arm64: dts: qcom: sc7280: add IPA information")
commit fd0f72c3 ("arm64: dts: qcom: sc7180: define ipa_fw_mem node")

I intend for these commits to go through the Qualcomm repository, to avoid conflicting with other activity being merged there.

Signed-off-by: Alex Elder <elder@linaro.org>
Link: https://lore.kernel.org/r/20210802233019.800250-1-elder@linaro.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 03 August 2021, 8 commits
-
-
Committed by Jason Wang
The word 'the' is doubled in the comment "If the the TLB range ops are supported...". Remove the duplicate 'the' from the comment.

Signed-off-by: Jason Wang <wangborong@cdjrlc.com>
Link: https://lore.kernel.org/r/20210803142020.124230-1-wangborong@cdjrlc.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Masahiro Yamada
Currently, the (z)install targets in arch/arm64/Makefile descend into arch/arm64/boot/Makefile to invoke the shell script, but there is no good reason to do so. arch/arm64/Makefile can run the shell script directly.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Link: https://lore.kernel.org/r/20210729140527.443116-1-masahiroy@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Yee Lee
MTE support needs to be optionally disabled at runtime for hardware issue workarounds, firmware development, and some evaluation work on system resources and performance.

This patch makes two changes:
(1) It moves the init of the tag-allocation bits (ATA/ATA0) to cpu_enable_mte(), as these are not cached in the TLB.
(2) It allows the shadow value of ID_AA64PFR1_EL1.MTE to be overridden by passing "arm64.nomte" on the command line.

When the feature value is off, ATA and TCF will not be set and the related functionality is accordingly suppressed.

Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Suggested-by: Marc Zyngier <maz@kernel.org>
Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Yee Lee <yee.lee@mediatek.com>
Link: https://lore.kernel.org/r/20210803070824.7586-2-yee.lee@mediatek.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Mark Rutland
When the function_graph tracer is in use, arch_stack_walk() may unwind the stack incorrectly, erroneously reporting itself, missing the final entry which is being traced, and reporting all traced entries between these off-by-one from where they should be.

When ftrace hooks a function return, the original return address is saved to the fgraph ret_stack, and the return address in the LR (or the function's frame record) is replaced with `return_to_handler`.

When arm64's unwinder encounters frames returning to `return_to_handler`, it finds the associated original return address from the fgraph ret stack, assuming the most recent `return_to_handler` entry on the stack corresponds to the most recent entry in the fgraph ret stack, and so on.

When arch_stack_walk() is used to dump the current task's stack, it starts from the caller of arch_stack_walk(). However, arch_stack_walk() can be traced, and so may push an entry on to the fgraph ret stack, leaving the fgraph ret stack offset by one from the expected position.

This can be seen when dumping the stack via /proc/self/stack, where enabling the graph tracer results in an unexpected `stack_trace_save_tsk` entry at the start of the trace, and `el0_svc` missing from the end of the trace.

This patch fixes this by marking arch_stack_walk() as notrace, as we do for all other functions on the path to ftrace_graph_get_ret_stack(). While a few helper functions are not marked notrace, their calls/returns are balanced, and will have no observable effect when examining the fgraph ret stack.

It is possible for an exception boundary to cause a similar offset if the return address of the interrupted context was in the LR. Fixing those cases will require some more substantial rework, and is left for subsequent patches.

Before:

| # cat /proc/self/stack
| [<0>] proc_pid_stack+0xc4/0x140
| [<0>] proc_single_show+0x6c/0x120
| [<0>] seq_read_iter+0x240/0x4e0
| [<0>] seq_read+0xe8/0x140
| [<0>] vfs_read+0xb8/0x1e4
| [<0>] ksys_read+0x74/0x100
| [<0>] __arm64_sys_read+0x28/0x3c
| [<0>] invoke_syscall+0x50/0x120
| [<0>] el0_svc_common.constprop.0+0xc4/0xd4
| [<0>] do_el0_svc+0x30/0x9c
| [<0>] el0_svc+0x2c/0x54
| [<0>] el0t_64_sync_handler+0x1a8/0x1b0
| [<0>] el0t_64_sync+0x198/0x19c
| # echo function_graph > /sys/kernel/tracing/current_tracer
| # cat /proc/self/stack
| [<0>] stack_trace_save_tsk+0xa4/0x110
| [<0>] proc_pid_stack+0xc4/0x140
| [<0>] proc_single_show+0x6c/0x120
| [<0>] seq_read_iter+0x240/0x4e0
| [<0>] seq_read+0xe8/0x140
| [<0>] vfs_read+0xb8/0x1e4
| [<0>] ksys_read+0x74/0x100
| [<0>] __arm64_sys_read+0x28/0x3c
| [<0>] invoke_syscall+0x50/0x120
| [<0>] el0_svc_common.constprop.0+0xc4/0xd4
| [<0>] do_el0_svc+0x30/0x9c
| [<0>] el0t_64_sync_handler+0x1a8/0x1b0
| [<0>] el0t_64_sync+0x198/0x19c

After:

| # cat /proc/self/stack
| [<0>] proc_pid_stack+0xc4/0x140
| [<0>] proc_single_show+0x6c/0x120
| [<0>] seq_read_iter+0x240/0x4e0
| [<0>] seq_read+0xe8/0x140
| [<0>] vfs_read+0xb8/0x1e4
| [<0>] ksys_read+0x74/0x100
| [<0>] __arm64_sys_read+0x28/0x3c
| [<0>] invoke_syscall+0x50/0x120
| [<0>] el0_svc_common.constprop.0+0xc4/0xd4
| [<0>] do_el0_svc+0x30/0x9c
| [<0>] el0_svc+0x2c/0x54
| [<0>] el0t_64_sync_handler+0x1a8/0x1b0
| [<0>] el0t_64_sync+0x198/0x19c
| # echo function_graph > /sys/kernel/tracing/current_tracer
| # cat /proc/self/stack
| [<0>] proc_pid_stack+0xc4/0x140
| [<0>] proc_single_show+0x6c/0x120
| [<0>] seq_read_iter+0x240/0x4e0
| [<0>] seq_read+0xe8/0x140
| [<0>] vfs_read+0xb8/0x1e4
| [<0>] ksys_read+0x74/0x100
| [<0>] __arm64_sys_read+0x28/0x3c
| [<0>] invoke_syscall+0x50/0x120
| [<0>] el0_svc_common.constprop.0+0xc4/0xd4
| [<0>] do_el0_svc+0x30/0x9c
| [<0>] el0_svc+0x2c/0x54
| [<0>] el0t_64_sync_handler+0x1a8/0x1b0
| [<0>] el0t_64_sync+0x198/0x19c

Cc: <stable@vger.kernel.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210802164845.45506-3-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
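
The fix itself is small; a sketch of the resulting definition (the unwinding body is unchanged):

    #include <linux/stacktrace.h>

    /*
     * notrace keeps this function off the fgraph ret_stack, so the saved
     * return addresses line up with the frames being unwound.
     */
    notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
    			     void *cookie, struct task_struct *task,
    			     struct pt_regs *regs)
    {
    	/* ... existing frame-record walking loop ... */
    }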
-
Committed by Mark Rutland
Due to a copy-paste error, we describe struct stackframe::pc as a snapshot of the `fp` field rather than the `lr` field. Fix the comment.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210802164845.45506-2-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Barry Song
Obviously KASLR has been setting the module region to 2 GB rather than 4 GB since commit b2eed9b5 ("arm64/kernel: kaslr: reduce module randomization range to 2 GB"), so fix the size of the region in Kconfig.

On the other hand, even though RANDOMIZE_MODULE_REGION_FULL is not set, module_alloc() can fall back to a 2 GB window if ARM64_MODULE_PLTS is set. In this case veneers are still needed; !RANDOMIZE_MODULE_REGION_FULL doesn't necessarily mean veneers are not needed. So fix the doc to be more precise, to avoid any confusion for readers of the code.

Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Ard Biesheuvel <ard.biesheuvel@arm.com>
Cc: Qi Liu <liuqi115@huawei.com>
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Link: https://lore.kernel.org/r/20210730125131.13724-1-song.bao.hua@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Masahiro Yamada
Commit 987fdfec ("arm64: move --fix-cortex-a53-843419 linker test to Kconfig") fixed the false-positive warning in the installation step. Yet, there are some cases where this false positive is shown. For example, you can see it when you cross 987fdfec during git-bisect:

$ git checkout 987fdfec^
  [ snip ]
$ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig all
  [ snip ]
$ git checkout v5.13
  [ snip ]
$ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig all
  [ snip ]
arch/arm64/Makefile:25: ld does not support --fix-cortex-a53-843419; kernel may be susceptible to erratum

In the stale include/config/auto.config, CONFIG_ARM64_ERRATUM_843419=y is set without CONFIG_ARM64_LD_HAS_FIX_ERRATUM_843419, so the warning is displayed while parsing the Makefiles. Make will restart with the updated include/config/auto.config, hence CONFIG_ARM64_LD_HAS_FIX_ERRATUM_843419 will be set eventually, but this warning is a surprise for users.

Commit 25896d07 ("x86/build: Fix compiler support check for CONFIG_RETPOLINE") addressed a similar issue. Move $(warning ...) out of the parse stage of the Makefiles.

The same applies to CONFIG_ARM64_USE_LSE_ATOMICS.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Link: https://lore.kernel.org/r/20210801053525.105235-1-masahiroy@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Mark Rutland
Due to inconsistencies in the way we manipulate compat GPRs, we have a few issues today:

* For audit and tracing, where error codes are handled as a (native) long, negative error codes are expected to be sign-extended to the native 64 bits, or they may fail to be matched correctly. Thus a syscall which fails with an error may erroneously be identified as failing.

* For ptrace, *all* compat return values should be sign-extended for consistency with 32-bit arm, but we currently only do this for negative return codes.

* As we may transiently set the upper 32 bits of some compat GPRs while in the kernel, these can be sampled by perf, which is somewhat confusing. This means that where a syscall returns a pointer above 2G, this will be sign-extended, but will not be mistaken for an error as error codes are constrained to the inclusive range [-4096, -1] where no user pointer can exist.

To fix all of these, we must consistently use helpers to get/set the compat GPRs, ensuring that we never write the upper 32 bits of the return code, and always sign-extend when reading the return code. This patch does so, with the following changes:

* We re-organise syscall_get_return_value() to always sign-extend for compat tasks, and reimplement syscall_get_error() atop. We update syscall_trace_exit() to use syscall_get_return_value().

* We consistently use syscall_set_return_value() to set the return value, ensuring the upper 32 bits are never set unexpectedly.

* As the core audit code currently uses regs_return_value() rather than syscall_get_return_value(), we special-case this for compat_user_mode(regs) such that this will do the right thing. Going forward, we should try to move the core audit code over to syscall_get_return_value().

Cc: <stable@vger.kernel.org>
Reported-by: He Zhe <zhe.he@windriver.com>
Reported-by: weiyuchen <weiyuchen3@huawei.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210802104200.21390-1-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
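
A hedged sketch of the reorganised helpers (sign_extend64() is the generic helper from <linux/bitops.h>; syscall_get_error() is rebuilt on top, as described above):

    #include <linux/bitops.h>
    #include <linux/compat.h>
    #include <linux/err.h>

    static inline long syscall_get_return_value(struct task_struct *task,
    					    struct pt_regs *regs)
    {
    	unsigned long val = regs->regs[0];

    	/* Compat return values are always sign-extended, like 32-bit arm. */
    	if (is_compat_thread(task_thread_info(task)))
    		val = sign_extend64(val, 31);

    	return val;
    }

    static inline long syscall_get_error(struct task_struct *task,
    				     struct pt_regs *regs)
    {
    	long error = syscall_get_return_value(task, regs);

    	return IS_ERR_VALUE((unsigned long)error) ? error : 0;
    }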
-