- 05 June 2021, 1 commit
-
-
Committed by Peter Collingbourne
Currently, on an anonymous page fault, the kernel allocates a zeroed page and maps it in user space. If the mapping is tagged (PROT_MTE), set_pte_at() additionally clears the tags. It is, however, more efficient to clear the tags at the same time as zeroing the data on allocation. To avoid clearing the tags on any page (which may not be mapped as tagged), only do this if the vma flags contain VM_MTE. This requires introducing a new GFP flag that is used to determine whether to clear the tags. The DC GZVA instruction with a 0 top byte (and 0 tag) requires top-byte-ignore. Set the TCR_EL1.{TBI1,TBID1} bits irrespective of whether KASAN_HW is enabled.
Signed-off-by: Peter Collingbourne <pcc@google.com> Co-developed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://linux-review.googlesource.com/id/Id46dc94e30fe11474f7e54f5d65e7658dbdddb26 Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com> Link: https://lore.kernel.org/r/20210602235230.3928842-4-pcc@google.com Signed-off-by: Will Deacon <will@kernel.org>
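As a rough illustration of the flow described above, the sketch below gates tag clearing on a VM_MTE-style vma flag via a GFP-style request bit. All identifiers here (the flag, the helper, the struct) are hypothetical placeholders and do not match the kernel's actual GFP flag or fault path.

```c
/* Illustrative sketch only -- names are made up, not the kernel API. */
#define EXAMPLE_GFP_ZEROTAGS  0x1u   /* "also clear MTE tags while zeroing" */
#define EXAMPLE_VM_MTE        0x2ul  /* vma mapped with PROT_MTE */

struct example_vma {
        unsigned long vm_flags;
};

static unsigned int example_anon_fault_gfp(const struct example_vma *vma,
                                           unsigned int base_gfp)
{
        /* Only request a tag clear for tagged mappings, so untagged pages
         * are not written twice (once for data, once for tags). */
        if (vma->vm_flags & EXAMPLE_VM_MTE)
                base_gfp |= EXAMPLE_GFP_ZEROTAGS;
        return base_gfp;
}
```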
-
- 11 May 2021, 1 commit
-
-
Committed by Peter Collingbourne
A valid implementation choice for the ChooseRandomNonExcludedTag() pseudocode function used by IRG is to behave in the same way as with GCR_EL1.RRND=0. This would mean that RGSR_EL1.SEED is used as an LFSR which must have a non-zero value in order for IRG to properly produce pseudorandom numbers. However, RGSR_EL1 is reset to an UNKNOWN value on soft reset and thus may reset to 0. Therefore we must initialize RGSR_EL1.SEED to a non-zero value in order to ensure that IRG behaves as expected.
Signed-off-by: Peter Collingbourne <pcc@google.com> Fixes: 3b714d24 ("arm64: mte: CPU feature detection and initial sysreg configuration") Cc: <stable@vger.kernel.org> # 5.10 Link: https://linux-review.googlesource.com/id/I2b089b6c7d6f17ee37e2f0db7df5ad5bcc04526c Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20210507185905.1745402-1-pcc@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 29 March 2021, 2 commits
-
-
Committed by Mark Rutland
In __cpu_setup we conditionally manipulate the TCR_EL1 value in x10 after previously using x10 as a scratch register for unrelated temporary variables. To make this a bit clearer, let's move the TCR_EL1 value into a named register `tcr`. To simplify the register allocation, this is placed in the highest available caller-saved scratch register. Following the example of `mair`, we initialise the register with the default value prior to any feature discovery, and write it to TCR_EL1 after all feature discovery is complete, which allows us to simplify the feature discovery code. The existing `mte_tcr` register is no longer needed, and is replaced by the use of x10 as a temporary, matching the rest of the MTE feature discovery assembly in __cpu_setup. As x20 is no longer used, the function is now AAPCS compliant, as we've generally aimed for in our assembly functions. There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210326180137.43119-3-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Mark Rutland
In __cpu_setup we conditionally manipulate the MAIR_EL1 value in x5 before later reusing x5 as a scratch register for unrelated temporary variables. To make this a bit clearer, let's move the MAIR_EL1 value into a named register `mair`. To simplify the register allocation, this is placed in the highest available caller-saved scratch register, x17. As it is no longer clobbered by other usage, we can write the value to MAIR_EL1 at the end of the function as we do for TCR_EL1, rather than part-way through feature discovery. There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210326180137.43119-2-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 08 February 2021, 2 commits
-
-
Committed by Marc Zyngier
Turning the MMU on is a popular sport in the arm64 kernel, and we do it more than once, or even twice. As we are about to add even more, let's turn it into a macro. No expected functional change.
Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: David Brazdil <dbrazdil@google.com> Link: https://lore.kernel.org/r/20210208095732.3267263-4-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Marc Zyngier
The arm64 kernel has long been able to use more than 39-bit VAs. Since day one, actually. Let's rewrite the offending comment.
Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: David Brazdil <dbrazdil@google.com> Link: https://lore.kernel.org/r/20210208095732.3267263-3-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
-
- 06 January 2021, 1 commit
-
-
Committed by Catalin Marinas
Commit 49b3cf03 ("kasan: arm64: set TCR_EL1.TBID1 when enabled") set the TBID1 bit for the KASAN_SW_TAGS configuration, freeing up 8 bits to be used by PAC. With in-kernel MTE now in mainline, also set this bit for the KASAN_HW_TAGS configuration.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Peter Collingbourne <pcc@google.com> Acked-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Acked-by: Andrey Konovalov <andreyknvl@google.com>
-
- 23 December 2020, 1 commit
-
-
Committed by Vincenzo Frascino
Hardware tag-based KASAN relies on the Memory Tagging Extension (MTE) feature and requires it to be enabled. This patch adds a new mte_enable_kernel() helper that enables MTE in Synchronous mode in EL1 and is intended to be called from the KASAN runtime during initialization. The Tag Checking operation causes a synchronous data abort as a consequence of a tag check fault when MTE is configured in synchronous mode. As part of this change, enable the match-all tag for EL1 to allow the kernel to access user pages without faulting. This is required because the kernel does not have knowledge of the tags set by the user in a page. Note: For MTE, the TCF bit field in SCTLR_EL1 affects only EL1, in a similar way as TCF0 affects EL0. MTE is built on top of the Top Byte Ignore (TBI) feature, hence we enable TBI as part of this patch as well.
Link: https://lkml.kernel.org/r/7352b0a0899af65c2785416c8ca6bf3845b66fa1.1606161801.git.andreyknvl@google.com Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Co-developed-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 03 December 2020, 1 commit
-
-
Committed by Mark Rutland
Let's make SCTLR_ELx initialization a bit clearer by using meaningful names for the initialization values, following the same scheme for SCTLR_EL1 and SCTLR_EL2. These definitions will be used more widely in subsequent patches. There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201113124937.20574-5-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 26 November 2020, 1 commit
-
-
Committed by Peter Collingbourne
On hardware supporting pointer authentication, we previously ended up enabling TBI on instruction accesses when tag-based ASAN was enabled, but this was costing us 8 bits of PAC entropy, which was unnecessary since tag-based ASAN does not require TBI on instruction accesses. Get them back by setting TCR_EL1.TBID1.
Signed-off-by: Peter Collingbourne <pcc@google.com> Reviewed-by: Andrey Konovalov <andreyknvl@google.com> Link: https://linux-review.googlesource.com/id/I3dded7824be2e70ea64df0aabab9598d5aebfcc4 Link: https://lore.kernel.org/r/20f64e26fc8a1309caa446fffcb1b4e2fe9e229f.1605952129.git.pcc@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 11 November 2020, 1 commit
-
-
Committed by Mark Rutland
Depending on configuration options and specific code paths, we either use the empty_zero_page or the configuration-dependent reserved_ttbr0 as a reserved value for TTBR{0,1}_EL1. To simplify this code, let's always allocate and use the same reserved_pg_dir, replacing reserved_ttbr0. Note that this is allocated (and hence pre-zeroed), and is also marked as read-only in the kernel Image mapping. Keeping this separate from the empty_zero_page potentially helps with robustness as the empty_zero_page is used in a number of cases where a failure to map it read-only could allow it to become corrupted. The (presently unused) swapper_pg_end symbol is also removed, and comments are added wherever we rely on the offsets between the pre-allocated pg_dirs to keep these cases easily identifiable.
Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201103102229.8542-1-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 04 September 2020, 2 commits
-
-
Committed by Vincenzo Frascino
Add the cpufeature and hwcap entries to detect the presence of MTE. Any secondary CPU not supporting the feature, if detected on the boot CPU, will be parked. Add the minimum SCTLR_EL1 and HCR_EL2 bits for enabling MTE. The Normal Tagged memory type is configured in MAIR_EL1 before the MMU is enabled in order to avoid disrupting other CPUs in the CnP domain.
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Co-developed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Suzuki K Poulose <Suzuki.Poulose@arm.com>
-
Committed by Catalin Marinas
Once user space is given access to tagged memory, the kernel must be able to clear/save/restore tags visible to the user. This is done via the linear mapping, therefore map it as such. The new MT_NORMAL_TAGGED index for MAIR_EL1 is initially mapped as Normal memory and later changed to Normal Tagged via the cpufeature infrastructure. From a mismatched attribute aliases perspective, the Tagged memory is considered a permission and it won't lead to undefined behaviour.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Suzuki K Poulose <Suzuki.Poulose@arm.com>
-
- 10 June 2020, 2 commits
-
-
Committed by Mike Rapoport
The replacement of <asm/pgtable.h> with <linux/pgtable.h> made the include of the latter in the middle of asm includes. Fix this up with the aid of the below script and manual adjustments here and there.

    import sys
    import re

    if len(sys.argv) is not 3:
        print "USAGE: %s <file> <header>" % (sys.argv[0])
        sys.exit(1)

    hdr_to_move="#include <linux/%s>" % sys.argv[2]
    moved = False
    in_hdrs = False

    with open(sys.argv[1], "r") as f:
        lines = f.readlines()
        for _line in lines:
            line = _line.rstrip('\n')
            if line == hdr_to_move:
                continue
            if line.startswith("#include <linux/"):
                in_hdrs = True
            elif not moved and in_hdrs:
                moved = True
                print hdr_to_move
            print line

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chris Zankel <chris@zankel.net> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vincent Chen <deanbo422@gmail.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Will Deacon <will@kernel.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Link: http://lkml.kernel.org/r/20200514170327.31389-4-rppt@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Mike Rapoport
The include/linux/pgtable.h is going to be the home of generic page table manipulation functions. Start with moving asm-generic/pgtable.h to include/linux/pgtable.h and make the latter include asm/pgtable.h.
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chris Zankel <chris@zankel.net> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vincent Chen <deanbo422@gmail.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Will Deacon <will@kernel.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Link: http://lkml.kernel.org/r/20200514170327.31389-3-rppt@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 15 May 2020, 1 commit
-
-
Committed by Sami Tolvanen
Don't lose the current task's shadow stack when the CPU is suspended.
Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
-
- 28 April 2020, 2 commits
-
-
Committed by Mark Rutland
Currently __cpu_setup conditionally initializes the address authentication keys and enables them in SCTLR_EL1, doing so differently for the primary CPU and secondary CPUs, and skipping this work for CPUs returning from an idle state. For the latter case, cpu_do_resume restores the keys and SCTLR_EL1 value after the MMU has been enabled. This flow is rather difficult to follow, so instead let's move the primary and secondary CPU initialization into their respective boot paths. By following the example of cpu_do_resume and doing so once the MMU is enabled, we can always initialize the keys from the values in thread_struct, and avoid the machinery necessary to pass the keys in secondary_data or open-coding initialization for the boot CPU. This means we perform an additional RMW of SCTLR_EL1, but we already do this in the cpu_do_resume path, and for other features in cpufeature.c, so this isn't a major concern in a bringup path. Note that even while the enable bits are clear, the key registers are accessible. As this now renders the argument to __cpu_setup redundant, let's also remove that entirely. Future extensions can follow a similar approach to initialize values that differ for primary/secondary CPUs.
Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Cc: Amit Daniel Kachhap <amit.kachhap@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200423101606.37601-3-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Mark Rutland
The 'sync' argument to the ptrauth_keys_install_kernel macro is somewhat opaque at callsites, so instead let's have regular and _nosync variants of the macro to make this a little more obvious.
Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Amit Daniel Kachhap <amit.kachhap@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200423101606.37601-2-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
-
- 24 March 2020, 1 commit
-
-
Committed by Remi Denis-Courmont
In practice, this requires only 2 instructions, or even only 1 for the idmap_pg_dir size (with 4 or 64 KiB pages). Only the MAIR values needed more than 2 instructions, and they were already converted to mov_q by 95b3f74b.
Signed-off-by: Remi Denis-Courmont <remi.denis.courmont@huawei.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com>
-
- 18 March 2020, 4 commits
-
-
Committed by Amit Daniel Kachhap
This patch restores the kernel keys from the current task during cpu resume, after the mmu is turned on and ptrauth is enabled. A flag is added in the macro ptrauth_keys_install_kernel to check if an isb instruction needs to be executed.
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Kristina Martsenko
Set up keys to use pointer authentication within the kernel. The kernel will be compiled with APIAKey instructions; the other keys are currently unused. Each task is given its own APIAKey, which is initialized during fork. The key is changed during context switch and on kernel entry from EL0. The keys for idle threads need to be set before calling any C functions, because it is not possible to enter and exit a function with different keys.
Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com> [Amit: Modified secondary cores key structure, comments] Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Kristina Martsenko
When the kernel is compiled with pointer auth instructions, the boot CPU needs to start using address auth very early, so change the cpucap to account for this. Pointer auth must be enabled before we call C functions, because it is not possible to enter a function with pointer auth disabled and exit it with pointer auth enabled. Note, mismatches between architected and IMPDEF algorithms will still be caught by the cpufeature framework (the separate *_ARCH and *_IMP_DEF cpucaps). Note the change in behavior: if the boot CPU has address auth and a late CPU does not, then the late CPU is parked by the cpufeature framework. This is possible as the kernel will only have NOP-space instructions for PAC, so such a mismatched late CPU will silently ignore those instructions in C functions. Also, if the boot CPU does not have address auth and the late CPU does, then the late CPU will still boot, but with the ptrauth feature disabled. Leave generic authentication as a "system scope" cpucap for now, since initially the kernel will only use address authentication.
Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com> [Amit: Re-worked ptrauth setup logic, comments] Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Amit Daniel Kachhap
This patch allows __cpu_setup to be invoked with one of these flags: ARM64_CPU_BOOT_PRIMARY, ARM64_CPU_BOOT_SECONDARY or ARM64_CPU_RUNTIME. This is required as some cpufeatures need different handling during different scenarios. The input parameter in x0 is preserved till the end to be used inside this function. There should be no functional change with this patch; it is useful for the subsequent ptrauth patch, which utilizes it. Some upcoming arm cpufeatures can also utilize these flags.
Suggested-by: James Morse <james.morse@arm.com> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com> Reviewed-by: James Morse <james.morse@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 07 March 2020, 1 commit
-
-
Committed by Ionela Voinescu
The activity monitors extension is an optional extension introduced by the ARMv8.4 CPU architecture. In order to access the activity monitors counters safely, if desired, the kernel should detect the presence of the extension through the feature register, and mediate the access. Therefore, disable direct accesses to activity monitors counters from EL0 (userspace) and trap them to EL1 (kernel). Note that the ARM64_AMU_EXTN kernel config does not have an effect on this code. Given that amuserenr_el0 resets to an UNKNOWN value, setting the trap of EL0 accesses to EL1 is always attempted for safety and security considerations. Therefore firmware should still ensure accesses to AMU registers are not trapped in EL2/EL3, as this code cannot be bypassed if the CPU implements the Activity Monitors Unit.
Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com> Reviewed-by: James Morse <james.morse@arm.com> Reviewed-by: Valentin Schneider <valentin.schneider@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Steve Capper <steve.capper@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 27 February 2020, 1 commit
-
-
Committed by Mark Rutland
There's no reason that cpu_do_switch_mm() needs to be written as an assembly function, and having it as a C function would make it easier to maintain. This patch converts cpu_do_switch_mm() to C, removing code that this change makes redundant (e.g. the mmid macro). Since the header comment was stale and the prototype now implies all the necessary information, this comment is removed. The 'pgd_phys' argument is made a phys_addr_t to match the return type of virt_to_phys(). At the same time, post_ttbr_update_workaround() is updated to use IS_ENABLED(), which allows the compiler to figure out it can elide calls for !CONFIG_CAVIUM_ERRATUM_27456 builds. There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will@kernel.org> [catalin.marinas@arm.com: change comments from asm-style to C-style] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 17 January 2020, 2 commits
-
-
Committed by Catalin Marinas
Currently, the arm64 __cpu_setup has hard-coded constants for the memory attributes that go into the MAIR_EL1 register. Define proper macros in asm/sysreg.h and make use of them in proc.S.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
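For orientation, MAIR_EL1 holds eight 8-bit attribute fields, one per memory-type index, and the encodings for the common arm64 memory types are architectural constants. The sketch below uses illustrative macro names (the kernel's actual names in asm/sysreg.h differ), but the attribute values themselves are the standard Armv8-A encodings:

```c
/* Standard Armv8-A MAIR attribute encodings; macro names are illustrative,
 * not the kernel's. Each attribute occupies 8 bits at (index * 8). */
#define EX_MAIR_DEVICE_nGnRnE  0x00ULL  /* Device, non-gathering/reordering, no early ack */
#define EX_MAIR_DEVICE_nGnRE   0x04ULL
#define EX_MAIR_NORMAL_NC      0x44ULL  /* Normal, non-cacheable */
#define EX_MAIR_NORMAL_WB      0xffULL  /* Normal, write-back, read/write-allocate */

/* Place an 8-bit attribute at a given MAIR index (0..7). */
#define EX_MAIR_ATTR(attr, idx)  ((unsigned long long)(attr) << ((idx) * 8))

/* Example MAIR_EL1 image: device at index 0, normal-NC at 1, normal-WB at 2. */
#define EX_MAIR_EL1_VALUE                          \
        (EX_MAIR_ATTR(EX_MAIR_DEVICE_nGnRnE, 0) |  \
         EX_MAIR_ATTR(EX_MAIR_NORMAL_NC, 1)     |  \
         EX_MAIR_ATTR(EX_MAIR_NORMAL_WB, 2))
```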
-
Committed by Sami Tolvanen
idmap_kpti_install_ng_mappings uses x18 as a temporary register, which will result in a conflict when x18 is reserved. Use x16 and x17 instead where needed.
Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
-
- 08 January 2020, 1 commit
-
-
Committed by Mark Brown
In an effort to clarify and simplify the annotation of assembly functions in the kernel, new macros have been introduced. These replace ENTRY and ENDPROC and also add a new annotation for static functions, which previously had no ENTRY equivalent. Update the annotations in the mm code to the new macros. Even the functions called from non-standard environments like idmap have no special requirements on their environments, so they can be treated like regular functions.
Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
-
- 28 August 2019, 1 commit
-
-
Committed by Mark Rutland
While the MMU is disabled, I-cache speculation can result in instructions being fetched from the PoC. During boot we may patch instructions (e.g. for alternatives and jump labels), and these may be dirty at the PoU (and stale at the PoC). Thus, while the MMU is disabled in the KPTI pagetable fixup code we may load stale instructions into the I-cache, potentially leading to subsequent crashes when executing regions of code which have been modified at runtime. Similarly to commit 8ec41987 ("arm64: mm: ensure patched kernel text is fetched from PoU"), we can invalidate the I-cache after enabling the MMU to prevent such issues. The KPTI pagetable fixup code itself should be clean to the PoC per the boot protocol, so no maintenance is required for this code.
Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
-
- 09 August 2019, 3 commits
-
-
Committed by Steve Capper
Previous patches have enabled 52-bit kernel + user VAs and there is no longer any scenario where user VA != kernel VA size. This patch removes the now-redundant vabits_user variable and replaces usage with vabits_actual where appropriate.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Steve Capper <steve.capper@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Steve Capper
Most of the machinery is now in place to enable 52-bit kernel VAs that are detectable at boot time. This patch adds a Kconfig option for 52-bit user and kernel addresses and plumbs in the requisite CONFIG_ macros as well as sets TCR.T1SZ, physvirt_offset and vmemmap at early boot. To simplify things this patch also removes the 52-bit user/48-bit kernel kconfig option.
Signed-off-by: Steve Capper <steve.capper@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Steve Capper
When running with a 52-bit userspace VA and a 48-bit kernel VA we offset ttbr1_el1 to allow the kernel pagetables with a 52-bit PTRS_PER_PGD to be used for both userspace and kernel. Moving on to a 52-bit kernel VA we no longer require this offset to ttbr1_el1 should we be running on a system with HW support for 52-bit VAs. This patch introduces conditional logic to offset_ttbr1 to query SYS_ID_AA64MMFR2_EL1 whenever 52-bit VAs are selected. If there is HW support for 52-bit VAs then the ttbr1 offset is skipped. We choose to read a system register rather than vabits_actual because offset_ttbr1 can be called in places where the kernel data is not actually mapped. Calls to offset_ttbr1 appear to be made from rarely called code paths so this extra logic is not expected to adversely affect performance.
Signed-off-by: Steve Capper <steve.capper@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
-
- 19 June 2019, 1 commit
-
-
Committed by Thomas Gleixner
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not see http www gnu org licenses extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 503 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Alexios Zavras <alexios.zavras@intel.com> Reviewed-by: Allison Randal <allison@lohutok.net> Reviewed-by: Enrico Weigelt <info@metux.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190602204653.811534538@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 09 April 2019, 1 commit
-
-
Committed by Jean-Philippe Brucker
When the CPU comes out of suspend, the firmware may have modified the OS Double Lock Register. Save it in an unused slot of cpu_suspend_ctx, and restore it on resume.
Cc: <stable@vger.kernel.org> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 01 March 2019, 1 commit
-
-
Committed by Zhang Lei
On the Fujitsu-A64FX cores ver(1.0, 1.1), memory access may cause an undefined fault (Data abort, DFSC=0b111111). This fault occurs under a specific hardware condition when a load/store instruction performs an address translation. Any load/store instruction, except non-fault access including Armv8 and SVE, might cause this undefined fault. The TCR_ELx.NFD1 bit is used by the kernel when CONFIG_RANDOMIZE_BASE is enabled to mitigate timing attacks against KASLR, where the kernel address space could be probed using the FFR and suppressed fault on SVE loads. Since this erratum causes spurious exceptions, which may corrupt the exception registers, we clear the TCR_ELx.NFDx=1 bits when booting on an affected CPU.
Signed-off-by: Zhang Lei <zhang.lei@jp.fujitsu.com> [Generated MIDR value/mask for __cpu_setup(), removed spurious-fault handler and always disabled the NFDx bits on affected CPUs] Signed-off-by: James Morse <james.morse@arm.com> Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 06 February 2019, 1 commit
-
-
Committed by Julien Thierry
The CPU does not receive signals for interrupts with a priority masked by ICC_PMR_EL1. This means the CPU might not come back from a WFI instruction. Make sure ICC_PMR_EL1 does not mask interrupts when doing a WFI. Since the logic of cpu_do_idle is becoming a bit more complex than just two instructions, let's turn it from ASM to C.
Signed-off-by: Julien Thierry <julien.thierry@arm.com> Suggested-by: Daniel Thompson <daniel.thompson@linaro.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 29 December 2018, 1 commit
-
-
Committed by Andrey Konovalov
Tag-based KASAN uses the Top Byte Ignore feature of arm64 CPUs to store a pointer tag in the top byte of each pointer. This commit enables the TCR_TBI1 bit, which enables Top Byte Ignore for the kernel, when tag-based KASAN is used.
Link: http://lkml.kernel.org/r/f51eca084c8cdb2f3a55195fe342dc8953b7aead.1544099024.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Reviewed-by: Dmitry Vyukov <dvyukov@google.com> Acked-by: Will Deacon <will.deacon@arm.com> Cc: Christoph Lameter <cl@linux.com> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
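For readers unfamiliar with the mechanism: with TBI enabled the MMU ignores bits 63:56 of an address during translation, so software can stash a tag there and still dereference the pointer. Below is a small standalone illustration of that tag packing (plain user-space C, not the kernel's KASAN helpers):

```c
#include <stdint.h>
#include <stdio.h>

/* Pack/extract a software tag in the top byte (bits 63:56) of an address,
 * the representation tag-based KASAN uses; with TCR_EL1.TBI1 set, the
 * hardware ignores this byte when translating kernel addresses. */
static inline uint64_t set_tag(uint64_t addr, uint8_t tag)
{
        return (addr & ~(0xffULL << 56)) | ((uint64_t)tag << 56);
}

static inline uint8_t get_tag(uint64_t addr)
{
        return (uint8_t)(addr >> 56);
}

int main(void)
{
        uint64_t va = 0xffff000012345678ULL;         /* example kernel VA */
        uint64_t tagged = set_tag(va, 0x5a);

        printf("tagged = %#llx, tag = %#x\n",
               (unsigned long long)tagged, get_tag(tagged));
        return 0;
}
```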
-
- 11 December 2018, 3 commits
-
-
Committed by Will Deacon
Enabling 52-bit VAs for userspace is pretty confusing, since it requires you to select "48-bit" virtual addressing in the Kconfig. Rework the logic so that 52-bit user virtual addressing is advertised in the "Virtual address space size" choice, along with some help text to describe its interaction with Pointer Authentication. The EXPERT-only option to force all user mappings to the 52-bit range is then made available immediately below the VA size selection.
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Steve Capper
On arm64 there is optional support for a 52-bit virtual address space. To exploit this one has to be running with a 64KB page size and be running on hardware that supports this. For an arm64 kernel supporting a 48-bit VA with a 64KB page size, some changes are needed to support a 52-bit userspace:

* TCR_EL1.T0SZ needs to be 12 instead of 16,
* TASK_SIZE needs to reflect the new size.

This patch implements the above when support for 52-bit VAs is detected at early boot time. On arm64, userspace address translation is controlled by TTBR0_EL1. As well as userspace, TTBR0_EL1 controls:

* The identity mapping,
* EFI runtime code.

It is possible to run a kernel with an identity mapping that has a larger VA size than userspace (and for this case __cpu_set_tcr_t0sz() would set TCR_EL1.T0SZ as appropriate). However, when the conditions for 52-bit userspace are met, it is possible to keep TCR_EL1.T0SZ fixed at 12. Thus in this patch, the TCR_EL1.T0SZ size changing logic is disabled.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Steve Capper <steve.capper@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
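As a quick check on the T0SZ values quoted above: TCR_EL1.T0SZ encodes the TTBR0 region size as 2^(64 - T0SZ) bytes, so T0SZ is simply 64 minus the VA width. This is a general architectural relation rather than anything specific to this patch:

```c
#include <stdio.h>

int main(void)
{
        /* T0SZ = 64 - VA_BITS: 48-bit userspace -> 16, 52-bit userspace -> 12. */
        const int va_bits[] = { 48, 52 };

        for (unsigned int i = 0; i < sizeof(va_bits) / sizeof(va_bits[0]); i++)
                printf("VA_BITS=%d -> T0SZ=%d\n", va_bits[i], 64 - va_bits[i]);
        return 0;
}
```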
-
Committed by Steve Capper
Enabling 52-bit VAs on arm64 requires that the PGD table expands from 64 entries (for the 48-bit case) to 1024 entries. This quantity, PTRS_PER_PGD, is used as follows to compute which PGD entry corresponds to a given virtual address, addr:

    pgd_index(addr) -> (addr >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1)

Userspace addresses are prefixed by 0's, so for a 48-bit userspace address, uva, the following is true:

    (uva >> PGDIR_SHIFT) & (1024 - 1) == (uva >> PGDIR_SHIFT) & (64 - 1)

In other words, a 48-bit userspace address will have the same pgd_index when using PTRS_PER_PGD = 64 and 1024. Kernel addresses are prefixed by 1's so, given a 48-bit kernel address, kva, we have the following inequality:

    (kva >> PGDIR_SHIFT) & (1024 - 1) != (kva >> PGDIR_SHIFT) & (64 - 1)

In other words a 48-bit kernel virtual address will have a different pgd_index when using PTRS_PER_PGD = 64 and 1024. If, however, we note that:

    kva = 0xFFFF << 48 + lower (where lower[63:48] == 0b)

and PGDIR_SHIFT = 42 (as we are dealing with 64KB PAGE_SIZE), we can consider:

    (kva >> PGDIR_SHIFT) & (1024 - 1) - (kva >> PGDIR_SHIFT) & (64 - 1)
      = (0xFFFF << 6) & 0x3FF - (0xFFFF << 6) & 0x3F    // "lower" cancels out
      = 0x3C0

In other words, one can switch PTRS_PER_PGD to the 52-bit value globally provided that they increment ttbr1_el1 by 0x3C0 * 8 = 0x1E00 bytes when running with 48-bit kernel VAs (TCR_EL1.T1SZ = 16). For kernel configurations where 52-bit userspace VAs are possible, this patch offsets ttbr1_el1 and sets PTRS_PER_PGD corresponding to the 52-bit value.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Suggested-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Steve Capper <steve.capper@arm.com> [will: added comment to TTBR1_BADDR_4852_OFFSET calculation] Signed-off-by: Will Deacon <will.deacon@arm.com>
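The 0x3C0 result above can be reproduced numerically; the standalone snippet below (not kernel code) recomputes the pgd_index difference for an arbitrary 48-bit kernel VA with PGDIR_SHIFT = 42, and should print a delta of 0x3c0 entries, i.e. 0x1e00 bytes:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        const unsigned int pgdir_shift = 42;            /* 64KB pages */
        /* A 48-bit kernel VA: top 16 bits set, arbitrary lower bits. */
        uint64_t kva = 0xffff0000deadbeefULL | (0x2aULL << pgdir_shift);

        uint64_t idx52 = (kva >> pgdir_shift) & (1024 - 1); /* PTRS_PER_PGD = 1024 */
        uint64_t idx48 = (kva >> pgdir_shift) & (64 - 1);   /* PTRS_PER_PGD = 64 */

        printf("delta = %#llx entries, ttbr1 offset = %#llx bytes\n",
               (unsigned long long)(idx52 - idx48),
               (unsigned long long)((idx52 - idx48) * 8));
        return 0;
}
```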
-