- 09 September 2022, 3 commits
-
-
Submitted by Mark Brown

Normally we include the full register name in the defines for fields within registers, but this has not been followed for ID registers. In preparation for automatic generation of defines, add the _EL1s into the defines for ID_AA64MMFR0_EL1 to follow the convention. No functional changes.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-5-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
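For illustration, a hedged before/after sketch of the naming convention (TGRAN4 is just one field of ID_AA64MMFR0_EL1; the shift value follows the architected register layout):

```c
/* Before: field defines omitted the register's _EL1 suffix. */
#define ID_AA64MMFR0_TGRAN4_SHIFT	28

/* After: the full register name is part of every field define. */
#define ID_AA64MMFR0_EL1_TGRAN4_SHIFT	28
```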
-
Submitted by Kristina Martsenko

A recent change renamed CTR_DMINLINE_SHIFT to CTR_EL0_DminLine_SHIFT but didn't fully update CTR_CACHE_MINLINE_MASK. As CTR_CACHE_MINLINE_MASK is not used anywhere anyway, just remove it.

Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-4-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Mark Brown

SMIDR_EL1 was converted to automatic generation, but some of the constants for fields in it were mistakenly left behind; remove them.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-2-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 23 August 2022, 5 commits
-
-
Submitted by Mark Brown

Currently when taking an SME access trap we allocate storage for the SVE register state, in order to be able to handle storage of streaming-mode SVE. Because this allocation path originally served a purely SVE context, it also flushes the SVE register state if storage was already allocated, which is not desirable in the SME context. For an SME access trap to be taken the task must not be in streaming mode, so either SVE register state already exists for regular SVE mode and would be corrupted by the flush, or the task does not have TIF_SVE and the flush is redundant. Fix this by adding a flag to sve_alloc() indicating whether we are in an SVE context and need to flush the state. Freshly allocated storage is always zeroed either way.

Fixes: 8bd7f91c ("arm64/sme: Implement traps and syscall handling for SME")
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220817182324.638214-4-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
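A minimal sketch of the resulting interface, assuming the flag is a plain bool (the sve_state_size() helper name is taken from the surrounding arm64 FP/SIMD code):

```c
/*
 * Allocate SVE state storage for a task. Only flush state that
 * was already allocated when the caller is in a pure-SVE context;
 * the SME access trap path passes flush = false.
 */
void sve_alloc(struct task_struct *task, bool flush)
{
	if (task->thread.sve_state) {
		if (flush)
			memset(task->thread.sve_state, 0,
			       sve_state_size(task));
		return;
	}

	/* Freshly allocated storage is zeroed either way. */
	task->thread.sve_state =
		kzalloc(sve_state_size(task), GFP_KERNEL);
}
```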
-
Submitted by Mark Brown

Ard noticed that since we converted CTR_EL0 to automatic generation we have been seeing errors on some systems when handling the value of cache_type_cwg(), such as:

  CPU features: No Cache Writeback Granule information, assuming 128

This is because the manual definition of CTR_EL0_CWG_MASK was done without a shift, while our convention is to define the mask after shifting. This broke the user in cache_type_cwg(), which was written for the manual shift-then-mask pattern. Fix this by converting it to use SYS_FIELD_GET(). The only other field of this register whose _MASK is used is IminLine, which is at offset 0 and so unaffected.

Fixes: 9a3634d0 ("arm64/sysreg: Convert CTR_EL0 to automatic generation")
Reported-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220818213613.733091-4-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
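A hedged illustration of the two conventions (CWG occupies CTR_EL0 bits [27:24]; SYS_FIELD_GET() is a thin wrapper around FIELD_GET() from linux/bitfield.h):

```c
#include <linux/bitfield.h>

/* Generated-header convention: the mask is already shifted. */
#define CTR_EL0_CWG_SHIFT	24
#define CTR_EL0_CWG_MASK	GENMASK(27, 24)

/* Broken: shift first, then apply the already-shifted mask. */
static inline u32 cwg_broken(u64 ctr)
{
	return (ctr >> CTR_EL0_CWG_SHIFT) & CTR_EL0_CWG_MASK; /* effectively 0 */
}

/* Fixed: extract the field directly via the shifted mask. */
static inline u32 cwg_fixed(u64 ctr)
{
	return FIELD_GET(CTR_EL0_CWG_MASK, ctr);
}
```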
-
Submitted by Mark Brown

The SYS_FIELD_ macros are not safe for assembly contexts; move them inside the guarded section.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220818213613.733091-3-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
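A sketch of the guard, assuming the usual __ASSEMBLY__ convention for C-only sections of arm64 headers:

```c
#ifndef __ASSEMBLY__

/* C-only: FIELD_GET()/FIELD_PREP() cannot be used from assembly. */
#define SYS_FIELD_GET(reg, field, val)	\
	FIELD_GET(reg##_##field##_MASK, val)

#define SYS_FIELD_PREP(reg, field, val)	\
	FIELD_PREP(reg##_##field##_MASK, val)

#endif /* __ASSEMBLY__ */
```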
-
Submitted by Mark Brown

The SYS_FIELD_ macros in sysreg.h use definitions from bitfield.h, but there is no direct inclusion of it; add one to ensure that sysreg.h is directly usable.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220818213613.733091-2-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Mark Rutland

On arm64, "rodata=full" has been supported (but not documented) since commit:

  c55191e9 ("arm64: mm: apply r/o permissions of VM areas to its linear alias as well")

As it's necessary to determine the rodata configuration early during boot, arm64 has an early_param() handler for this, whereas init/main.c has a __setup() handler which is run later. Unfortunately, this split meant that since commit:

  f9a40b08 ("init/main.c: return 1 from handled __setup() functions")

... passing "rodata=full" would result in a spurious warning from the __setup() handler (though RO permissions would be configured appropriately). Further, "rodata=full" has been broken since commit:

  0d6ea3ac ("lib/kstrtox.c: add "false"/"true" support to kstrtobool()")

... which caused strtobool() to parse "full" as false (in addition to many other values not documented for the "rodata=" kernel parameter). This patch fixes the breakage by:

* Moving the core parameter parser to an early_param(), such that it is available early.
* Adding an (optional) arch hook which arm64 can use to parse "full".
* Updating the documentation to mention that "full" is valid for arm64.
* Having the core parameter parser handle "on" and "off" explicitly, such that any undocumented values (e.g. typos such as "ful") are reported as errors rather than being silently accepted.

Note that __setup() and early_param() have opposite conventions for their return values: __setup() uses 1 to indicate a parameter was handled, while early_param() uses 0. A sketch of the resulting parser follows below.

Fixes: f9a40b08 ("init/main.c: return 1 from handled __setup() functions")
Fixes: 0d6ea3ac ("lib/kstrtox.c: add "false"/"true" support to kstrtobool()")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Jagdish Gediya <jvgediya@linux.ibm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20220817154022.3974645-1-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
- 17 August 2022, 1 commit
-
-
Submitted by Oliver Upton

KVM does not support AArch32 on asymmetric systems. To that end, enforce AArch64-only behavior for PMCR_EL1.LC when on an asymmetric system.

Fixes: 2122a833 ("arm64: Allow mismatched 32-bit EL0 support")
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220816192554.1455559-2-oliver.upton@linux.dev
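In sketch form, assuming a kvm_supports_32bit_el0() helper that reports system-wide 32-bit EL0 support (per the mismatched-EL0 series referenced above):

```c
/* Sketch: sanitise a guest-visible PMCR value. */
static u64 sanitise_pmcr(u64 val)
{
	/*
	 * No AArch32 on asymmetric systems: force the 64-bit cycle
	 * counter by setting the long-cycle (LC) bit.
	 */
	if (!kvm_supports_32bit_el0())
		val |= ARMV8_PMU_PMCR_LC;
	return val;
}
```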
-
- 30 July 2022, 1 commit
-
-
Submitted by Zhou Guanghui

In a system using HBM (a Huawei Ascend ARM64 SoC), when a multi-bit ECC error occurs, the BIOS marks the corresponding area (for example, 2 MB) as unusable. On the next restart, these areas are either not reported at all or reported as EFI_UNUSABLE_MEMORY. Both cases increase the number of memblocks, with EFI_UNUSABLE_MEMORY producing the larger count. For example, if the EFI_UNUSABLE_MEMORY type is reported:

  ...
  memory[0x92] [0x0000200834a00000-0x0000200835bfffff], 0x0000000001200000 bytes on node 7 flags: 0x0
  memory[0x93] [0x0000200835c00000-0x0000200835dfffff], 0x0000000000200000 bytes on node 7 flags: 0x4
  memory[0x94] [0x0000200835e00000-0x00002008367fffff], 0x0000000000a00000 bytes on node 7 flags: 0x0
  memory[0x95] [0x0000200836800000-0x00002008369fffff], 0x0000000000200000 bytes on node 7 flags: 0x4
  memory[0x96] [0x0000200836a00000-0x0000200837bfffff], 0x0000000001200000 bytes on node 7 flags: 0x0
  memory[0x97] [0x0000200837c00000-0x0000200837dfffff], 0x0000000000200000 bytes on node 7 flags: 0x4
  memory[0x98] [0x0000200837e00000-0x000020087fffffff], 0x0000000048200000 bytes on node 7 flags: 0x0
  memory[0x99] [0x0000200880000000-0x0000200bcfffffff], 0x0000000350000000 bytes on node 6 flags: 0x0
  memory[0x9a] [0x0000200bd0000000-0x0000200bd01fffff], 0x0000000000200000 bytes on node 6 flags: 0x4
  memory[0x9b] [0x0000200bd0200000-0x0000200bd07fffff], 0x0000000000600000 bytes on node 6 flags: 0x0
  memory[0x9c] [0x0000200bd0800000-0x0000200bd09fffff], 0x0000000000200000 bytes on node 6 flags: 0x4
  memory[0x9d] [0x0000200bd0a00000-0x0000200fcfffffff], 0x00000003ff600000 bytes on node 6 flags: 0x0
  memory[0x9e] [0x0000200fd0000000-0x0000200fd01fffff], 0x0000000000200000 bytes on node 6 flags: 0x4
  memory[0x9f] [0x0000200fd0200000-0x0000200fffffffff], 0x000000002fe00000 bytes on node 6 flags: 0x0
  ...

The EFI memory map is parsed to construct the memblock arrays before the memblock arrays can be resized. As a result, memory regions beyond INIT_MEMBLOCK_REGIONS are lost. Add a new macro, INIT_MEMBLOCK_MEMORY_REGIONS, to replace INIT_MEMBLOCK_REGIONS as the size of the static memblock.memory array. Allow overriding the memblock.memory array size with an architecture-defined INIT_MEMBLOCK_MEMORY_REGIONS, and make arm64 set INIT_MEMBLOCK_MEMORY_REGIONS to 1024 when CONFIG_EFI is enabled.

Link: https://lkml.kernel.org/r/20220615102742.96450-1-zhouguanghui1@huawei.com
Signed-off-by: Zhou Guanghui <zhouguanghui1@huawei.com>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Tested-by: Darren Hart <darren@os.amperecomputing.com>
Acked-by: Will Deacon <will@kernel.org> [arm64]
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Xu Qiang <xuqiang36@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
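A hedged sketch of the override mechanism (the arm64 header placement and the *8 multiplier are assumptions consistent with the stated value of 1024, since INIT_MEMBLOCK_REGIONS is 128):

```c
/* mm/memblock.c: default the new macro to the old array size. */
#ifndef INIT_MEMBLOCK_MEMORY_REGIONS
#define INIT_MEMBLOCK_MEMORY_REGIONS	INIT_MEMBLOCK_REGIONS
#endif

static struct memblock_region
	memblock_memory_init_regions[INIT_MEMBLOCK_MEMORY_REGIONS] __initdata_memblock;

/* arm64 (e.g. an arch header): EFI memory maps can be large. */
#if defined(CONFIG_EFI)
#define INIT_MEMBLOCK_MEMORY_REGIONS	(INIT_MEMBLOCK_REGIONS * 8)	/* 1024 */
#endif
```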
-
- 28 July 2022, 4 commits
-
-
Submitted by Marc Zyngier

Implementing a new unwinder is a bit more involved than writing a couple of helpers, so let's not lure the reader into a false sense of comfort. Instead, let's point out what they should call into and what sort of parameters they need to provide.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Tested-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Link: https://lore.kernel.org/r/20220727142906.1856759-7-maz@kernel.org
-
Submitted by Marc Zyngier

kvm_nvhe_stack_kern_va() only makes sense as part of the nVHE unwinder, so simply move it there.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Tested-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Link: https://lore.kernel.org/r/20220727142906.1856759-5-maz@kernel.org
-
Submitted by Marc Zyngier

Having multiple versions of on_accessible_stack() (one per unwinder) makes it very hard to reason about what is used where, due to the complexity of the various includes, the forward declarations, and the reliance on everything being 'inline'. Instead, move the code back where it should be. Each unwinder implements:

- on_accessible_stack(), as well as the helpers it depends on
- unwind()/unwind_next(), as they pass on_accessible_stack() as a parameter to unwind_next_common() (which is the only common code here)

This hardly results in any duplication, and makes it much easier to reason about the code.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Tested-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Link: https://lore.kernel.org/r/20220727142906.1856759-4-maz@kernel.org
-
Submitted by Marc Zyngier

The unwinding code doesn't really belong to the exit handling code. Instead, move it to a file (conveniently named stacktrace.c, to confuse the reviewer), and move all the stacktrace-related stuff there. It will be joined by more code very soon.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Tested-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Link: https://lore.kernel.org/r/20220727142906.1856759-3-maz@kernel.org
-
- 26 July 2022, 13 commits
-
-
Submitted by Kalesh Singh

Implements the common framework necessary for unwind() to work in the protected nVHE context:

- on_accessible_stack()
- on_overflow_stack()
- unwind_next()

Protected nVHE unwind() is used to unwind and save the hyp stack addresses to the shared stacktrace buffer. The host reads the entries in this buffer, then symbolizes and dumps the stacktrace (in a later patch in the series).

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-17-kaleshsingh@google.com
-
Submitted by Kalesh Singh

Add some stub implementations of the protected nVHE stack unwinder, for building. These are implemented later in this series.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-15-kaleshsingh@google.com
-
Submitted by Kalesh Singh

In protected nVHE mode the host cannot directly access hypervisor memory, so we will dump the hypervisor stacktrace to a buffer shared with the host. The minimum required buffer size, assuming the minimum frame size of [x29, x30] (2 * sizeof(long)), is half the combined size of the hypervisor and overflow stacks, plus an additional entry to delimit the end of the stacktrace. The stacktrace buffers are used later in the series to dump the nVHE hypervisor stacktrace when using protected mode.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-14-kaleshsingh@google.com
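That sizing rule can be written as a macro; a sketch assuming a one-page hypervisor stack (each 16-byte [x29, x30] frame yields one saved long, so half the stack bytes bound the entries):

```c
/*
 * Upper bound on entries: (stack bytes / 16) frames, one saved long
 * (8 bytes) each, i.e. half the combined stack size, plus one
 * terminating entry to delimit the end of the trace.
 */
#define NVHE_STACKTRACE_SIZE \
	((OVERFLOW_STACK_SIZE + PAGE_SIZE) / 2 + sizeof(long))
```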
-
Submitted by Kalesh Singh

In non-protected nVHE mode, unwind and dump the hypervisor backtrace from EL1. This is possible because the host can directly access the hypervisor stack pages in non-protected mode. The nVHE backtrace is dumped on hyp_panic(), before panicking the host:

  [ 101.498183] kvm [377]: nVHE call trace:
  [ 101.498363] kvm [377]: [<ffff8000090a6570>] __kvm_nvhe_hyp_panic+0xac/0xf8
  [ 101.499045] kvm [377]: [<ffff8000090a65cc>] __kvm_nvhe_hyp_panic_bad_stack+0x10/0x10
  [ 101.499498] kvm [377]: [<ffff8000090a61e4>] __kvm_nvhe_recursive_death+0x24/0x34
  . . .
  [ 101.524929] kvm [377]: [<ffff8000090a61e4>] __kvm_nvhe_recursive_death+0x24/0x34
  [ 101.525062] kvm [377]: [<ffff8000090a61e4>] __kvm_nvhe_recursive_death+0x24/0x34
  [ 101.525195] kvm [377]: [<ffff8000090a5de4>] __kvm_nvhe___kvm_vcpu_run+0x30/0x40c
  [ 101.525333] kvm [377]: [<ffff8000090a8b64>] __kvm_nvhe_handle___kvm_vcpu_run+0x30/0x48
  [ 101.525468] kvm [377]: [<ffff8000090a88b8>] __kvm_nvhe_handle_trap+0xc4/0x128
  [ 101.525602] kvm [377]: [<ffff8000090a7864>] __kvm_nvhe___host_exit+0x64/0x64
  [ 101.525745] kvm [377]: ---[ end nVHE call trace ]---

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-12-kaleshsingh@google.com
-
Submitted by Kalesh Singh

Implements the common framework necessary for unwind() to work in non-protected nVHE mode:

- on_accessible_stack()
- on_overflow_stack()
- unwind_next()

Non-protected nVHE unwind() is used by the host in EL1 to unwind and dump the hypervisor stacktrace.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-11-kaleshsingh@google.com
-
Submitted by Kalesh Singh

In non-protected nVHE mode (non-pKVM) the host can directly access hypervisor memory, so unwinding of the hypervisor stacktrace is done from EL1 to save memory for shared buffers. To unwind the hypervisor stack from EL1, the host needs to know the starting point for the unwind, plus information that will allow it to translate hypervisor stack addresses to the corresponding kernel addresses. This patch sets up that bookkeeping; it is made use of later in the series.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-10-kaleshsingh@google.com
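A sketch of the bookkeeping this implies; the struct and field names are assumptions based on the description (stack bases for address translation, plus the fp/pc starting point):

```c
/* Filled in by the hypervisor before hyp_panic(), read by the host. */
struct kvm_nvhe_stacktrace_info {
	unsigned long stack_base;		/* hyp VA of the hyp stack  */
	unsigned long overflow_stack_base;	/* hyp VA, overflow path    */
	unsigned long fp;			/* unwind starting frame    */
	unsigned long pc;			/* unwind starting pc       */
};
```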
-
Submitted by Kalesh Singh

Add stub implementations of the non-protected nVHE stack unwinder, for building. These are implemented later in this series.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-9-kaleshsingh@google.com
-
Submitted by Kalesh Singh

Add a brief description of how to use stacktrace/common.h to implement a stack unwinder.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-7-kaleshsingh@google.com
-
Submitted by Kalesh Singh

Move unwind() to stacktrace/common.h and, as a result, the kernel unwind_next() to asm/stacktrace.h. This allows reusing unwind() in the implementation of the nVHE HYP stack unwinder, later in the series.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-6-kaleshsingh@google.com
-
Submitted by Kalesh Singh

The unwinder code is made reusable so that it can be used to unwind various types of stacks. One use case is unwinding the nVHE hyp stack from the host (EL1) in non-protected mode, which means the unwinder must be able to translate HYP stack addresses to kernel addresses. Add a callback (stack_trace_translate_fp_fn) to allow specifying the translation function.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-5-kaleshsingh@google.com
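A sketch of the callback type; the exact second parameter is an assumption (some form of stack-type context accompanying the frame pointer):

```c
/*
 * Translate a frame pointer from one address space to another
 * (e.g. HYP VA -> kernel VA), updating *fp in place. Returns
 * false if the address cannot be translated, aborting the unwind.
 */
typedef bool (*stack_trace_translate_fp_fn)(unsigned long *fp,
					    enum stack_type type);
```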
-
Submitted by Kalesh Singh

Move the common unwind_next logic to stacktrace/common.h. This allows reusing the code in the implementation of the nVHE hypervisor stack unwinder, later in this series.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-4-kaleshsingh@google.com
-
Submitted by Kalesh Singh

Move the common on_accessible_stack() checks to stacktrace/common.h. This is used in the implementation of the nVHE hypervisor unwinder later in this series.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-3-kaleshsingh@google.com
-
Submitted by Kalesh Singh

In order to reuse the arm64 stack unwinding logic for the nVHE hypervisor stack, move the common code to a shared header (arch/arm64/include/asm/stacktrace/common.h). The nVHE hypervisor cannot safely link against kernel code, so we make use of the shared header to avoid duplicated logic later in this series.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-2-kaleshsingh@google.com
-
- 25 July 2022, 1 commit
-
-
Submitted by Jason A. Donenfeld

The archrandom interface was originally designed for x86, which supplies RDRAND/RDSEED for receiving random words into registers, resulting in one function to generate an int and another to generate a long. However, other architectures don't follow this. On arm64, the SMCCC TRNG interface can return between one and three longs. On s390, the CPACF TRNG interface can return arbitrary amounts, with four longs having the same cost as one. On UML, the os_getrandom() interface can return arbitrary amounts. So change the API signature to take a "max_longs" parameter designating the maximum number of longs requested, and return the number of longs generated. Since callers need to check this return value and loop anyway, each arch implementation does not bother implementing its own loop to try again to fill the maximum number of longs. Additionally, all existing callers pass in a constant max_longs parameter. Taken together, these two things mean that the codegen doesn't really change much for one-word-at-a-time platforms, while performance is greatly improved on platforms such as s390.

Acked-by: Heiko Carstens <hca@linux.ibm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
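A sketch of the revised API and the caller pattern it implies (the __must_check annotation is an assumption):

```c
/* Returns how many longs were actually filled, 0..max_longs. */
size_t __must_check arch_get_random_longs(unsigned long *v, size_t max_longs);
size_t __must_check arch_get_random_seed_longs(unsigned long *v, size_t max_longs);

/* Callers loop, falling back to other entropy sources on short reads: */
static void fill_longs(unsigned long *buf, size_t longs)
{
	while (longs) {
		size_t filled = arch_get_random_seed_longs(buf, longs);

		if (!filled)
			break;	/* e.g. mix in a timer-based fallback */
		buf += filled;
		longs -= filled;
	}
}
```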
-
- 23 July 2022, 3 commits
-
-
Submitted by Stafford Horne

The asm/pci.h files used by many newer architectures share similar definitions. Move the common parts to asm-generic/pci.h to allow sharing code.

Suggested-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/lkml/CAK8P3a0JmPeczfmMBE__vn=Jbvf=nkbpVaZCycyv40pZNCJJXQ@mail.gmail.com/
Link: https://lore.kernel.org/r/20220722214944.831438-5-shorne@gmail.com
Signed-off-by: Stafford Horne <shorne@gmail.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Pierre Morel <pmorel@linux.ibm.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
-
Submitted by Stafford Horne

The isa_dma_bridge_buggy symbol is only used for x86_32, and only x86_32 platforms or quirks ever set it. Add a new linux/isa-dma.h header that #defines isa_dma_bridge_buggy to 0 except on x86_32, where we keep it as a variable, and remove all the arch-specific definitions.

[bhelgaas: commit log]
Suggested-by: Arnd Bergmann <arnd@arndb.de>
Suggested-by: Christoph Hellwig <hch@infradead.org>
Link: https://lore.kernel.org/r/20220722214944.831438-3-shorne@gmail.com
Signed-off-by: Stafford Horne <shorne@gmail.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
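A sketch of what the described header boils down to (guard name assumed):

```c
#ifndef __LINUX_ISA_DMA_H
#define __LINUX_ISA_DMA_H

#ifdef CONFIG_X86_32
extern int isa_dma_bridge_buggy;	/* set by x86_32 quirks */
#else
#define isa_dma_bridge_buggy	(0)	/* constant: quirk code compiles away */
#endif

#endif /* __LINUX_ISA_DMA_H */
```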
-
Submitted by Stafford Horne

pci_get_legacy_ide_irq() is only used on platforms that support PNP, so many architectures define it but never use it. Replace uses of it with ATA_PRIMARY_IRQ() and ATA_SECONDARY_IRQ(), which provide the same functionality. Since pci_get_legacy_ide_irq() is no longer used, remove all the architecture-specific definitions of it, as well as asm-generic/pci.h, which only provides pci_get_legacy_ide_irq().

[bhelgaas: commit log]
Co-developed-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/20220722214944.831438-2-shorne@gmail.com
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Stafford Horne <shorne@gmail.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Pierre Morel <pmorel@linux.ibm.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 20 July 2022, 5 commits
-
-
Submitted by Mark Rutland

Since commit:

  a004393f ("arm64: idreg-override: use early FDT mapping in ID map")

kernels built with KASAN_INLINE=y die early in boot, before producing any console output. This is because the accesses made to the FDT (e.g. in generic string processing functions) are instrumented with KASAN, and with KASAN_INLINE=y any access to an address in TTBR0 results in a bogus shadow VA, causing a data abort. This patch fixes the problem by reverting commits:

  7559d9f9 ("arm64: setup: drop early FDT pointer helpers")
  bd0c3fa21878b6d0 ("arm64: idreg-override: use early FDT mapping in ID map")

... and using the TTBR1 fixmap mapping of the FDT. Note that due to a later commit:

  b65e411d ("arm64: Save state of HCR_EL2.E2H before switch to EL1")

... which altered the prototype of init_feature_override() (and its invocation from head.S), commit bd0c3fa21878b6d0 does not revert cleanly, and I've fixed that up manually.

Fixes: a004393f ("arm64: idreg-override: use early FDT mapping in ID map")
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20220713140949.45440-1-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Mark Brown

The v9.2 feature FEAT_EBF16 provides support for an extended BFloat16 mode. Allow userspace to discover system support for this feature by adding a hwcap for it.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220707103632.12745-4-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
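Userspace discovery then looks roughly like this (assuming the hwcap lands in AT_HWCAP2 under the name HWCAP2_EBF16):

```c
#include <stdio.h>
#include <sys/auxv.h>
#include <asm/hwcap.h>		/* assumed to define HWCAP2_EBF16 */

int main(void)
{
	if (getauxval(AT_HWCAP2) & HWCAP2_EBF16)
		puts("FEAT_EBF16: extended BFloat16 supported");
	else
		puts("FEAT_EBF16: not supported");
	return 0;
}
```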
-
Submitted by Mark Brown

When we added support for AT_HWCAP2, we took advantage of the fact that we had limited hwcaps to the low 32 bits and stored both hwcap words in a single unsigned long. Thanks to the ever-expanding capabilities of the architecture we have now allocated all 64 of the bits in an unsigned long, so in preparation for adding more hwcaps, convert elf_hwcap to be a bitmap instead, with 64 bits allocated to each AT_HWCAP. There should be no functional change from this patch.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220707103632.12745-3-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
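A sketch of the conversion (MAX_CPU_FEATURES = 128 follows from 64 bits per AT_HWCAP word times two words; the names are assumptions consistent with the description):

```c
#include <linux/bitmap.h>

#define MAX_CPU_FEATURES	128	/* 64 bits per AT_HWCAP word */

/* Was: extern unsigned long elf_hwcap; */
extern DECLARE_BITMAP(elf_hwcap, MAX_CPU_FEATURES);

static inline bool cpu_have_feature(unsigned int num)
{
	return test_bit(num, elf_hwcap);
}
```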
-
Submitted by Barry Song

THP_SWAP has been proven to improve swap throughput significantly on x86_64, according to commit bd4c82c2 ("mm, THP, swap: delay splitting THP after swapped out"). As long as arm64 uses a 4K page size, it is quite similar to x86_64 in having 2MB PMD THPs. THP_SWAP is architecture-independent, so enabling it on arm64 will benefit arm64 as well. A corner case is that MTE assumes only base pages can be swapped; we won't enable THP_SWAP for arm64 hardware with MTE support until MTE is reworked to coexist with THP_SWAP. A micro-benchmark was written to measure THP swapout throughput, as below:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/time.h>

unsigned long long tv_to_ms(struct timeval tv)
{
	return tv.tv_sec * 1000 + tv.tv_usec / 1000;
}

int main(void)
{
	struct timeval tv_b, tv_e;
#define SIZE (400 * 1024 * 1024)
	volatile void *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
				MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("fail to get memory");
		exit(-1);
	}

	madvise((void *)p, SIZE, MADV_HUGEPAGE);
	memset((void *)p, 0x11, SIZE);	/* write to fault in memory */

	gettimeofday(&tv_b, NULL);
	madvise((void *)p, SIZE, MADV_PAGEOUT);
	gettimeofday(&tv_e, NULL);

	printf("swp out bandwidth: %llu bytes/ms\n",
	       SIZE / (tv_to_ms(tv_e) - tv_to_ms(tv_b)));
	return 0;
}
```

Testing was done on an rk3568 64-bit quad-core Cortex-A55 platform (ROCK 3A):

  thp swp throughput w/o patch: 2734 bytes/ms (mean of 10 tests)
  thp swp throughput w/  patch: 3331 bytes/ms (mean of 10 tests)

Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Yang Shi <shy828301@gmail.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Barry Song <v-songbaohua@oppo.com>
Link: https://lore.kernel.org/r/20220720093737.133375-1-21cnbao@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Joey Gouly

The comment says this should be GENMASK_ULL(47, 12), so do that! GENMASK_ULL() has been available in assembly since:

  95b980d6 ("linux/bits.h: make BIT(), GENMASK(), and friends available in assembly")

Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Link: https://lore.kernel.org/all/20171221164851.edxq536yobjuagwe@armageddon.cambridge.arm.com/
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220708140056.10123-1-joey.gouly@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
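For reference, a worked expansion (the macro name here is hypothetical; the value is simply GENMASK_ULL(47, 12) evaluated):

```c
#include <linux/bits.h>

/* Bits [47:12] set: 0x0000fffffffff000ULL. */
#define EXAMPLE_PA_MASK		GENMASK_ULL(47, 12)
```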
-
- 18 July 2022, 2 commits
-
-
Submitted by Jason A. Donenfeld

When RDRAND was introduced, there was much discussion on whether it should be trusted and how the kernel should handle that. Initially, two mechanisms cropped up: CONFIG_ARCH_RANDOM, a compile-time switch, and "nordrand", a boot-time switch. Later the thinking evolved. With a properly designed RNG, using RDRAND values alone won't harm anything, even if the outputs are malicious. Rather, the issue is whether those values are being *trusted* to be good or not. And so a new set of options was introduced as the real ones that people use -- CONFIG_RANDOM_TRUST_CPU and "random.trust_cpu". With these options, RDRAND is used, but it's not always credited. So in the worst case it does nothing, and in the best case maybe it helps. Along the way, CONFIG_ARCH_RANDOM's meaning got sort of pulled into the center and became something certain platforms force-select. The old options don't really help with much, and it's a bit odd to have special handling for these instructions when the kernel can deal fine with the existence, untrusted existence, broken existence, or non-existence of that CPU capability. Simplify the situation by removing CONFIG_ARCH_RANDOM and using the ordinary asm-generic fallback pattern instead, keeping the two options that are actually used. "nordrand" is left in place for now, as its removal will take a different route.

Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Borislav Petkov <bp@suse.de>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
-
Submitted by Anshuman Khandual

This moves protection_map[] inside the platform code and makes it static.

Link: https://lkml.kernel.org/r/20220711070600.2378316-6-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Brian Cain <bcain@quicinc.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Chris Zankel <chris@zankel.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Dinh Nguyen <dinguyen@kernel.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Guo Ren <guoren@kernel.org>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Huacai Chen <chenhuacai@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Vineet Gupta <vgupta@kernel.org>
Cc: WANG Xuerui <kernel@xen0n.name>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
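On arm64 the result looks roughly like this sketch (a few representative entries; a real table covers all 16 VM_READ/VM_WRITE/VM_EXEC/VM_SHARED combinations, and the exact pgprot values are assumptions):

```c
/* Now private to the platform, e.g. in arch/arm64/mm code. */
static pgprot_t protection_map[16] __ro_after_init = {
	[VM_NONE]		= PAGE_NONE,
	[VM_READ]		= PAGE_READONLY,
	[VM_WRITE]		= PAGE_READONLY,	/* private write: COW */
	[VM_WRITE | VM_READ]	= PAGE_READONLY,
	/* ... VM_EXEC and VM_SHARED combinations elided ... */
};
```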
-
- 17 July 2022, 1 commit
-
-
Submitted by Marc Zyngier

Having kvm_arm_sys_reg_get_reg() and co in kvm_host.h gives the impression that these functions are free to be called from anywhere. Not quite: they really are tied to our internal sysreg handling, and they would be better off in the sys_regs.h header, which is private. kvm_host.h could also do with a bit of a diet, so let's just do that.

Signed-off-by: Marc Zyngier <maz@kernel.org>
-
- 16 July 2022, 1 commit
-
-
Submitted by Naveen N. Rao

Drop the __weak attribute from the following functions in kexec_core.c:

- machine_kexec_post_load()
- arch_kexec_protect_crashkres()
- arch_kexec_unprotect_crashkres()
- crash_free_reserved_phys_range()

Link: https://lkml.kernel.org/r/c0f6219e03cb399d166d518ab505095218a902dd.1656659357.git.naveen.n.rao@linux.vnet.ibm.com
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Suggested-by: Eric Biederman <ebiederm@xmission.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
-