- 27 January 2018, 1 commit

Submitted by Steve Capper
In cpu_do_switch_mm(.) with ARM64_SW_TTBR0_PAN=y we apply phys_to_ttbr to a value that already has an ASID inserted into the upper bits. For 52-bit PA configurations this then can give us TTBR0_EL1 registers that cause translation table walks to attempt to access non-zero PA[51:48] spuriously. Ultimately leading to a Synchronous External Abort on level 1 translation.

This patch re-arranges the logic in cpu_do_switch_mm(.) such that phys_to_ttbr is called before the ASID is inserted into the TTBR0 value.

Fixes: 6b88a32c ("arm64: kpti: Fix the interaction between ASID switching and software PAN")
Acked-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Tested-by: Kristina Martsenko <kristina.martsenko@arm.com>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
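As a standalone C sketch of the ordering issue (the mask value and the fold-by-shift are one plausible phys_to_ttbr, not the kernel's actual macro):

```c
#include <stdint.h>

#define TTBR_ASID_SHIFT     48
#define TTBR_BADDR_MASK_52  0x0000fffffffffffcULL   /* BADDR field, bits [47:2] */

/* One plausible phys_to_ttbr: fold PA[51:48] down into TTBR bits [5:2]. */
static uint64_t phys_to_ttbr(uint64_t pa)
{
    return (pa | (pa >> 46)) & TTBR_BADDR_MASK_52;
}

uint64_t make_ttbr0(uint64_t pgd_phys, uint64_t asid)
{
    /* Buggy order: the ASID already sitting in bits [63:48] gets folded into
     * the BADDR field, producing spurious non-zero PA[51:48]. */
    uint64_t bad  = phys_to_ttbr(pgd_phys | (asid << TTBR_ASID_SHIFT));

    /* Fixed order: convert the physical address first, then insert the ASID. */
    uint64_t good = phys_to_ttbr(pgd_phys) | (asid << TTBR_ASID_SHIFT);

    (void)bad;
    return good;
}
```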
-
- 17 January 2018, 1 commit

Submitted by Catalin Marinas
With ARM64_SW_TTBR0_PAN enabled, the exception entry code checks the active ASID to decide whether user access was enabled (non-zero ASID) when the exception was taken. On return from exception, if user access was previously disabled, it re-instates TTBR0_EL1 from the per-thread saved value (updated in switch_mm() or efi_set_pgd()).

Commit 7655abb9 ("arm64: mm: Move ASID from TTBR0 to TTBR1") makes the TTBR0_EL1 + ASID switching non-atomic. Subsequently, commit 27a921e7 ("arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN") changes the __uaccess_ttbr0_disable() function and asm macro to first write the reserved TTBR0_EL1 followed by the ASID=0 update in TTBR1_EL1. If an exception occurs between these two, the exception return code will re-instate a valid TTBR0_EL1. A similar scenario can happen in cpu_switch_mm() between setting the reserved TTBR0_EL1 and the ASID update in cpu_do_switch_mm().

This patch reverts the entry.S check for ASID == 0 to TTBR0_EL1 and disables the interrupts around the TTBR0_EL1 and ASID switching code in __uaccess_ttbr0_disable(). It also ensures that, when returning from the EFI runtime services, efi_set_pgd() doesn't leave a non-zero ASID in TTBR1_EL1 by using uaccess_ttbr0_{enable,disable}.

The accesses to current_thread_info()->ttbr0 are updated to use READ_ONCE/WRITE_ONCE.

As a safety measure, __uaccess_ttbr0_enable() always masks out any existing non-zero ASID in TTBR1_EL1 before writing in the new ASID.

Fixes: 27a921e7 ("arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN")
Acked-by: Will Deacon <will.deacon@arm.com>
Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: James Morse <james.morse@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Co-developed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
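Roughly, the fixed disable ordering looks like the kernel-style sketch below; reserved_pg_dir_phys and TTBR_ASID_MASK are assumed names here, not taken from the patch itself:

```c
/* Sketch only: no exception window between the TTBR0_EL1 and TTBR1_EL1 updates. */
static inline void uaccess_ttbr0_disable_sketch(void)
{
	unsigned long flags;
	u64 ttbr1;

	local_irq_save(flags);			/* an exception here could re-enable TTBR0 */
	ttbr1 = read_sysreg(ttbr1_el1) & ~TTBR_ASID_MASK;	/* reserved ASID 0     */
	write_sysreg(reserved_pg_dir_phys, ttbr0_el1);		/* zero-page tables    */
	isb();
	write_sysreg(ttbr1, ttbr1_el1);
	isb();
	local_irq_restore(flags);
}
```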
-
- 16 January 2018, 2 commits

Submitted by James Morse
KVM would like to consume any pending SError (or RAS error) after guest exit. Today it has to unmask SError and use dsb+isb to synchronise the CPU. With the RAS extensions we can use ESB to synchronise any pending SError.

Add the necessary macros to allow DISR to be read and converted to an ESR.

We clear the DISR register when we enable the RAS cpufeature, and the kernel has not executed any ESB instructions. Any value we find in DISR must have belonged to firmware. Executing an ESB instruction is the only way to update DISR, so we can expect firmware to have handled any deferred SError. By the same logic we clear DISR in the idle path.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
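In C-like terms, consuming a deferred SError might look like the sketch below; the ESB is written as its HINT encoding so it assembles without RAS support, and the helper and bit names are assumptions rather than the patch's actual macros:

```c
/* Sketch: synchronise, then read and consume any SError deferred into DISR_EL1. */
static inline bool serror_was_deferred(u64 *esr)
{
	u64 disr;

	asm volatile("hint #16" ::: "memory");	/* ESB */
	disr = read_sysreg_s(SYS_DISR_EL1);
	if (!(disr & BIT(31)))			/* DISR_EL1.A: nothing recorded */
		return false;

	write_sysreg_s(0, SYS_DISR_EL1);	/* clear the deferred record */
	*esr = disr & GENMASK(24, 0);		/* ISS-style syndrome bits */
	return true;
}
```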
-
Submitted by James Morse
__cpu_setup() configures SCTLR_EL1 using some hard-coded hex masks, and el2_setup() duplicates some of this when setting RES1 bits. Let's make this the same as KVM's hyp_init, which uses named bits.

First, we add definitions for all the SCTLR_EL{1,2} bits, the RES{1,0} bits, and those we want to set or clear.

Add a build_bug check to ensure all bits are either set or clear. This means we don't need to preserve the endianness configuration generated elsewhere.

Finally, move the head.S and proc.S users of these hard-coded masks over to the macro versions.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
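A tiny standalone illustration of the "every bit is either set or clear" build check, using only three made-up bits (the real masks name every SCTLR_EL1/SCTLR_EL2 bit):

```c
/* Illustrative subset of named SCTLR bits. */
#define SCTLR_ELx_M	(1U << 0)	/* MMU enable        */
#define SCTLR_ELx_A	(1U << 1)	/* alignment checks  */
#define SCTLR_ELx_C	(1U << 2)	/* data cache enable */

#define SCTLR_EL1_SET	(SCTLR_ELx_M | SCTLR_ELx_C)
#define SCTLR_EL1_CLEAR	(SCTLR_ELx_A)

/* With all 32 bits covered this comparison would be against 0xffffffff;
 * for the 3-bit demo it is 0x7. */
#if (SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != 0x7
#error "Inconsistent SCTLR_EL1 set/clear bits"
#endif
```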
-
- 13 January 2018, 1 commit

Submitted by James Morse
Now that KVM uses tpidr_el2 in the same way as Linux's cpu_offset in tpidr_el1, merge the two. This saves KVM from save/restoring tpidr_el1 on VHE hosts, and allows future code to blindly access per-cpu variables without triggering world-switch.

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
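Conceptually, the merged per-CPU offset read becomes a single patched mrs, something like the sketch below (the capability name is assumed to be ARM64_HAS_VIRT_HOST_EXTN):

```c
/* Sketch: VHE hosts run the kernel at EL2, so the per-cpu offset lives in
 * tpidr_el2 there and in tpidr_el1 everywhere else. */
static inline unsigned long my_cpu_offset_sketch(void)
{
	unsigned long off;

	asm(ALTERNATIVE("mrs %0, tpidr_el1",
			"mrs %0, tpidr_el2",
			ARM64_HAS_VIRT_HOST_EXTN)
	    : "=r" (off) : : "memory");
	return off;
}
```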
-
- 09 January 2018, 1 commit

Submitted by Marc Zyngier
We will soon need to invoke a CPU-specific function pointer after changing page tables, so move post_ttbr_update_workaround out into C code to make this possible.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 23 December 2017, 2 commits

Submitted by Kristina Martsenko
The top 4 bits of a 52-bit physical address are positioned at bits 2..5 in the TTBR registers. Introduce a couple of macros to move the bits there, and change all TTBR writers to use them.

Leave TTBR0 PAN code unchanged, to avoid complicating it. A system with 52-bit PA will have PAN anyway (because it's ARMv8.1 or later), and a system without 52-bit PA can only use up to 48-bit PAs. A later patch in this series will add a kconfig dependency to ensure PAN is configured.

In addition, when using 52-bit PA there is a special alignment requirement on the top-level table. We don't currently have any VA_BITS configuration that would violate the requirement, but one could be added in the future, so add a compile-time BUG_ON to check for it.

Tested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Bob Picco <bob.picco@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[catalin.marinas@arm.com: added TTBR_BADDR_MASK_52 comment]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
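For orientation, the bit placement can be checked mechanically; the shift amount used below (46) is what maps PA[51:48] onto TTBR bits [5:2], and is shown purely as an illustration:

```c
#include <stdint.h>

#define TTBR_HIGH_PA_SHIFT	46	/* PA[51:48] >> 46 == TTBR bits [5:2] */

_Static_assert(((UINT64_C(1) << 48) >> TTBR_HIGH_PA_SHIFT) == (UINT64_C(1) << 2),
	       "PA bit 48 lands on TTBR bit 2");
_Static_assert(((UINT64_C(1) << 51) >> TTBR_HIGH_PA_SHIFT) == (UINT64_C(1) << 5),
	       "PA bit 51 lands on TTBR bit 5");

/* Because TTBR bits [5:2] are reused for PA[51:48], the top-level table must be
 * 64-byte aligned so that its own address never occupies those bits. */
```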
-
Submitted by Kristina Martsenko
We currently copy the physical address size from ID_AA64MMFR0_EL1.PARange directly into TCR.(I)PS. This will not work for 4k and 16k granule kernels on systems that support 52-bit physical addresses, since 52-bit addresses are only permitted with the 64k granule.

To fix this, fall back to 48 bits when configuring the PA size when the kernel does not support 52-bit PAs. When it does, fall back to 52, to avoid similar problems in the future if the PA size is ever increased above 52.

Tested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Bob Picco <bob.picco@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[catalin.marinas@arm.com: tcr_set_pa_size macro renamed to tcr_compute_pa_size]
[catalin.marinas@arm.com: comments added to tcr_compute_pa_size]
[catalin.marinas@arm.com: definitions added for TCR_*PS_SHIFT]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
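A rough C rendition of what this clamping does (field positions per the architecture; ID_AA64MMFR0_PARANGE_MAX stands in for whatever cap the kernel build supports, e.g. 0x5 for 48-bit or 0x6 for 52-bit):

```c
/* Sketch: clamp ID_AA64MMFR0_EL1.PARange before copying it into TCR_EL1.IPS. */
#define TCR_IPS_SHIFT			32	/* TCR_EL1.IPS is bits [34:32] */
#define ID_AA64MMFR0_PARANGE_MAX	0x5	/* e.g. a 48-bit-PA kernel     */

static inline u64 tcr_compute_pa_size_sketch(u64 tcr)
{
	u64 parange = read_sysreg(id_aa64mmfr0_el1) & 0xf;	/* PARange, bits [3:0] */

	if (parange > ID_AA64MMFR0_PARANGE_MAX)
		parange = ID_AA64MMFR0_PARANGE_MAX;

	return tcr | (parange << TCR_IPS_SHIFT);
}
```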
-
- 11 December 2017, 3 commits

Submitted by Will Deacon
The post_ttbr0_update_workaround hook applies to any change to TTBRx_EL1. Since we're using TTBR1 for the ASID, rename the hook to make it clearer as to what it's doing.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Will Deacon
The pre_ttbr0_update_workaround hook is called prior to context-switching TTBR0 because Falkor erratum E1003 can cause TLB allocation with the wrong ASID if both the ASID and the base address of the TTBR are updated at the same time.

With the ASID sitting safely in TTBR1, we no longer update things atomically, so we can remove the pre_ttbr0_update_workaround macro as it's no longer required. The erratum infrastructure and documentation is left around for #E1003, as it will be required by the entry trampoline code in a future patch.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Will Deacon
In preparation for mapping kernelspace and userspace with different ASIDs, move the ASID to TTBR1 and update switch_mm to context-switch TTBR0 via an invalid mapping (the zero page).

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
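The shape of the new switch is roughly the following (a C sketch, not the actual proc.S assembly; TTBR_ASID_MASK is an assumed name):

```c
/* Sketch: TCR_EL1.A1 selects the ASID from TTBR1_EL1, so the ASID is updated
 * there and TTBR0_EL1 only ever carries a page-table base address. */
static inline void switch_mm_sketch(phys_addr_t pgd_phys, u64 asid)
{
	u64 ttbr1 = read_sysreg(ttbr1_el1);

	ttbr1 = (ttbr1 & ~TTBR_ASID_MASK) | (asid << 48);
	write_sysreg(ttbr1, ttbr1_el1);		/* new ASID first ...      */
	isb();
	write_sysreg(pgd_phys, ttbr0_el1);	/* ... then the new tables */
	isb();
}
```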
-
- 02 November 2017, 1 commit

Submitted by James Morse
There are a few places where we want to mask all exceptions. Today we do this in a piecemeal fashion, typically we expect the caller to have masked irqs and the arch code masks debug exceptions, ignoring serror which is probably masked.

Make it clear that 'mask all exceptions' is the intention by adding helpers to do exactly that. This will let us unmask SError without having to add 'oh and SError' to these paths.

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
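At its core, a "mask everything" helper is a single DAIF update, along the lines of this sketch:

```c
/* Sketch: set all four DAIF bits (Debug, SError/Abort, IRQ, FIQ) at once. */
static inline void local_mask_all_exceptions_sketch(void)
{
	asm volatile("msr daifset, #0xf" ::: "memory");
}

/* ...and the matching unmask clears them again. */
static inline void local_unmask_all_exceptions_sketch(void)
{
	asm volatile("msr daifclr, #0xf" ::: "memory");
}
```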
-
- 24 February 2017, 1 commit

Submitted by Shanker Donthineni
Commit 38fd94b0 ("arm64: Work around Falkor erratum 1003") tried to work around a hardware erratum, but actually caused a system crash of its own during switch_mm: cpu_do_switch_mm+0x20/0x40 efi_virtmap_load+0x34/0x40 virt_efi_get_next_variable+0x64/0xc8 efivar_init+0x8c/0x348 efisubsys_init+0xd4/0x270 do_one_initcall+0x80/0x110 kernel_init_freeable+0x19c/0x240 kernel_init+0x10/0x100 ret_from_fork+0x10/0x50 Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b In cpu_do_switch_mm, x1 contains the mm_struct pointer, which needs to be preserved by the pre_ttbr0_update_workaround macro rather than passed as a temporary. This patch clobbers x2 and x3 instead, keeping the mm_struct intact after the workaround has run. Fixes: 38fd94b0 ("arm64: Work around Falkor erratum 1003") Tested-by: NManoj Iyer <manoj.iyer@canonical.com> Signed-off-by: NShanker Donthineni <shankerd@codeaurora.org> Signed-off-by: NWill Deacon <will.deacon@arm.com>
-
- 10 February 2017, 1 commit

Submitted by Christopher Covington
The Qualcomm Datacenter Technologies Falkor v1 CPU may allocate TLB entries using an incorrect ASID when TTBRx_EL1 is being updated. When the erratum is triggered, page table entries using the new translation table base address (BADDR) will be allocated into the TLB using the old ASID. All circumstances leading to the incorrect ASID being cached in the TLB arise when software writes TTBRx_EL1[ASID] and TTBRx_EL1[BADDR], a memory operation is in the process of performing a translation using the specific TTBRx_EL1 being written, and the memory operation uses a translation table descriptor designated as non-global. EL2 and EL3 code changing the EL1&0 ASID is not subject to this erratum because hardware is prohibited from performing translations from an out-of-context translation regime.

Consider the following pseudo code.

  write new BADDR and ASID values to TTBRx_EL1

Replacing the above sequence with the one below will ensure that no TLB entries with an incorrect ASID are used by software.

  write reserved value to TTBRx_EL1[ASID]
  ISB
  write new value to TTBRx_EL1[BADDR]
  ISB
  write new value to TTBRx_EL1[ASID]
  ISB

When the above sequence is used, page table entries using the new BADDR value may still be incorrectly allocated into the TLB using the reserved ASID. Yet this will not reduce functionality, since TLB entries incorrectly tagged with the reserved ASID will never be hit by a later instruction.

Based on work by Shanker Donthineni <shankerd@codeaurora.org>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 22 November 2016, 1 commit

Submitted by Catalin Marinas
This patch takes the errata workaround code out of cpu_do_switch_mm into a dedicated post_ttbr0_update_workaround macro which will be reused in a subsequent patch.

Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 12 November 2016, 1 commit

Submitted by Mark Rutland
When returning from idle, we rely on the fact that thread_info lives at the end of the kernel stack, and restore this by masking the saved stack pointer. Subsequent patches will sever the relationship between the stack and thread_info, and to cater for this we must save/restore sp_el0 explicitly, storing it in cpu_suspend_ctx.

As cpu_suspend_ctx must be doubleword aligned, this leaves us with an extra slot in cpu_suspend_ctx. We can use this to save/restore tpidr_el1 in the same way, which simplifies the code, avoiding pointer chasing on the restore path (as we no longer need to load thread_info::cpu followed by the relevant slot in __per_cpu_offset based on this).

This patch stashes both registers in cpu_suspend_ctx.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: James Morse <james.morse@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 12 September 2016, 1 commit

Submitted by Mark Rutland
Make use of the new alternative_if and alternative_else_nop_endif and get rid of our homebrew NOP sleds, making the code simpler to read.

Note that for cpu_do_switch_mm the ret has been moved out of the alternative sequence, and in the default case there will be three additional NOPs executed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 03 September 2016, 1 commit

Submitted by James Morse
Changes to make the resume from cpu_suspend() code behave more like secondary boot caused debug exceptions to be unmasked early by __cpu_setup(). We then go on to restore mdscr_el1 in cpu_do_resume(), potentially taking break or watch points based on uninitialised registers.

Mask debug exceptions in cpu_do_resume(), which is specific to resume from cpu_suspend(). Debug exceptions will be restored to their original state by local_dbg_restore() in cpu_suspend(), which runs after hw_breakpoint_restore() has re-initialised the other registers.

Reported-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Fixes: cabe1c81 ("arm64: Change cpu_resume() to enable mmu early then access sleep_sp by va")
Cc: <stable@vger.kernel.org> # 4.7+
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 26 August 2016, 1 commit

Submitted by James Morse
Resume from hibernate needs to clean any text executed by the kernel with the MMU off to the PoC. Collect these functions together into the .idmap.text section as all this code is tightly coupled and also needs the same cleaning after resume.

Data is more complicated, secondary_holding_pen_release is written with the MMU on, clean and invalidated, then read with the MMU off. In contrast __boot_cpu_mode is written with the MMU off, the corresponding cache line is invalidated, so when we read it with the MMU on we don't get stale data. These cache maintenance operations conflict with each other if the values are within a Cache Writeback Granule (CWG) of each other.

Collect the data into two sections .mmuoff.data.read and .mmuoff.data.write, the linker script ensures mmuoff.data.write section is aligned to the architectural maximum CWG of 2KB.

Signed-off-by: James Morse <james.morse@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 19 July 2016, 1 commit

Submitted by Will Deacon
Clearing PSTATE.D is one of the requirements for generating a debug exception. The arm64 booting protocol requires that PSTATE.D is set, since many of the debug registers (for example, the hw_breakpoint registers) are UNKNOWN out of reset and could potentially generate spurious, fatal debug exceptions in early boot code if PSTATE.D was clear. Once the debug registers have been safely initialised, PSTATE.D is cleared, however this is currently broken for two reasons:

(1) The boot CPU clears PSTATE.D in a postcore_initcall and secondary CPUs clear PSTATE.D in secondary_start_kernel. Since the initcall runs after SMP (and the scheduler) have been initialised, there is no guarantee that it is actually running on the boot CPU. In this case, the boot CPU is left with PSTATE.D set and is not capable of generating debug exceptions.

(2) In a preemptible kernel, we may explicitly schedule on the IRQ return path to EL1. If an IRQ occurs with PSTATE.D set in the idle thread, then we may schedule the kthread_init thread, run the postcore_initcall to clear PSTATE.D and then context switch back to the idle thread before returning from the IRQ. The exception return path will then restore PSTATE.D from the stack, and set it again.

This patch fixes the problem by moving the clearing of PSTATE.D earlier to proc.S. This has the desirable effect of clearing it in one place for all CPUs, long before we have to worry about the scheduler or any exception handling. We ensure that the previous reset of MDSCR_EL1 has completed before unmasking the exception, so that any spurious exceptions resulting from UNKNOWN debug registers are not generated.

Without this patch applied, the kprobes selftests have been seen to fail under KVM, where we end up attempting to step the OOL instruction buffer with PSTATE.D set and therefore fail to complete the step.

Cc: <stable@vger.kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
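The ordering constraint described above can be pictured as the following sketch: reset the debug configuration, make sure it has taken effect, and only then clear PSTATE.D (D is bit 3 of the DAIF set/clear immediate):

```c
/* Sketch: debug exceptions are only unmasked once MDSCR_EL1 is in a sane state. */
static inline void enable_debug_exceptions_sketch(void)
{
	write_sysreg(0, mdscr_el1);			/* sane debug config first     */
	isb();						/* ensure the write has landed */
	asm volatile("msr daifclr, #8" ::: "memory");	/* now clear PSTATE.D          */
}
```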
-
- 28 April 2016, 2 commits

Submitted by James Morse
By enabling the MMU early in cpu_resume(), the sleep_save_sp and stack can be accessed by VA, which avoids the need to convert addresses and clean to PoC on the suspend path.

MMU setup is shared with the boot path, meaning the swapper_pg_dir is restored directly: ttbr1_el1 is no longer saved/restored.

struct sleep_save_sp is removed, replacing it with a single array of pointers.

cpu_do_{suspend,resume} could be further reduced to not restore: cpacr_el1, mdscr_el1, tcr_el1, vbar_el1 and sctlr_el1, all of which are set by __cpu_setup(). However these values all contain res0 bits that may be used to enable future features.

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Geoff Levand
To allow the assembler macros defined in arch/arm64/mm/proc-macros.S to be used outside the mm code, move the contents of proc-macros.S to asm/assembler.h. Also, delete proc-macros.S, and fix up all references to proc-macros.S.

Signed-off-by: Geoff Levand <geoff@infradead.org>
Acked-by: Pavel Machek <pavel@ucw.cz>
[rebased, included dcache_by_line_op]
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 26 February 2016, 1 commit

Submitted by Andrew Pinski
On ThunderX T88 pass 1.x through 2.1 parts, broadcast TLBI instructions may cause the icache to become corrupted if it contains data for a non-current ASID.

This patch implements the workaround (which invalidates the local icache when switching the mm) by using code patching.

Signed-off-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: David Daney <david.daney@cavium.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
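In rough C terms the patched-in workaround amounts to an extra local I-cache invalidate on context switch, as in this sketch (barriers shown unconditionally for simplicity; the capability name follows mainline's ARM64_WORKAROUND_CAVIUM_27456):

```c
/* Sketch: only affected ThunderX parts get the "ic iallu" patched in. */
static inline void cavium_27456_workaround_sketch(void)
{
	asm volatile(ALTERNATIVE("nop", "ic iallu", ARM64_WORKAROUND_CAVIUM_27456)
		     ::: "memory");
	asm volatile("dsb nsh; isb" ::: "memory");
}
```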
-
- 16 February 2016, 1 commit

Submitted by Mark Rutland
If page tables are modified without suitable TLB maintenance, the ARM architecture permits multiple TLB entries to be allocated for the same VA. When this occurs, it is permitted that TLB conflict aborts are raised in response to synchronous data/instruction accesses, and/or an amalgamation of the TLB entries may be used as a result of a TLB lookup.

The presence of conflicting TLB entries may result in a variety of behaviours detrimental to the system (e.g. erroneous physical addresses may be used by I-cache fetches and/or page table walks). Some of these cases may result in unexpected changes of hardware state, and/or result in the (asynchronous) delivery of SError.

To avoid these issues, we must avoid situations where conflicting entries may be allocated into TLBs. For user and module mappings we can follow a strict break-before-make approach, but this cannot work for modifications to the swapper page tables that cover the kernel text and data.

Instead, this patch adds code which is intended to be executed from the idmap, which can safely unmap the swapper page tables as it only requires the idmap to be active. This enables us to uninstall the active TTBR1_EL1 entry, invalidate TLBs, then install a new TTBR1_EL1 entry without potentially unmapping code or data required for the sequence. This avoids the risk of conflict, but requires that updates are staged in a copy of the swapper page tables prior to being installed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
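Reduced to its essentials, the idmap-resident sequence is: park TTBR1 on a safe table, invalidate, then install the new tables, something like this sketch (reserved_pgd_phys is a stand-in name):

```c
/* Sketch: run from the idmap so unmapping the swapper tables is safe. */
static inline void replace_ttbr1_sketch(phys_addr_t new_pgd_phys,
					phys_addr_t reserved_pgd_phys)
{
	write_sysreg(reserved_pgd_phys, ttbr1_el1);	/* uninstall the old tables */
	isb();
	asm volatile("tlbi vmalle1; dsb nsh; isb" ::: "memory");
	write_sysreg(new_pgd_phys, ttbr1_el1);		/* install the new ones     */
	isb();
}
```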
-
- 25 January 2016, 1 commit

Submitted by Lorenzo Pieralisi
The Performance Monitors extension is an optional feature of the AArch64 architecture, therefore, in order to access Performance Monitors registers safely, the kernel should detect the architected PMU unit presence through the ID_AA64DFR0_EL1 register PMUVer field before accessing them.

This patch implements a guard by reading the ID_AA64DFR0_EL1 register PMUVer field to detect the architected PMU presence and prevent accessing PMU system registers if the Performance Monitors extension is not implemented in the core.

Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: <stable@vger.kernel.org>
Fixes: 60792ad3 ("arm64: kernel: enforce pmuserenr_el0 initialization and restore")
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reported-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
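The guard boils down to a read of ID_AA64DFR0_EL1.PMUVer (bits [11:8]; 0x0 means no architected PMU, 0xf means an IMPLEMENTATION DEFINED one), as in this sketch:

```c
/* Sketch: only touch pmuserenr_el0 when an architected PMU is present. */
static inline void reset_pmuserenr_el0_sketch(void)
{
	u64 pmuver = (read_sysreg(id_aa64dfr0_el1) >> 8) & 0xf;

	if (pmuver != 0x0 && pmuver != 0xf)
		write_sysreg(0, pmuserenr_el0);	/* deny EL0 access to the counters */
}
```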
-
- 21 December 2015, 1 commit

Submitted by Lorenzo Pieralisi
The pmuserenr_el0 register value is architecturally UNKNOWN on reset. Current kernel code resets that register value iff the core pmu device is correctly probed in the kernel. On platforms with missing DT pmu nodes (or disabled perf events in the kernel), the pmu is not probed, therefore the pmuserenr_el0 register is not reset in the kernel, which means that its value retains the reset value that is architecturally UNKNOWN (system may run with eg pmuserenr_el0 == 0x1, which means that PMU counters access is available at EL0, which must be disallowed).

This patch adds code that resets pmuserenr_el0 on cold boot and restores it on core resume from shutdown, so that the pmuserenr_el0 setup is always enforced in the kernel.

Cc: <stable@vger.kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 12 December 2015, 1 commit

Submitted by Mark Rutland
We currently place __cpu_setup in .text.init, which ends up being part of .text. The .text.init section was a legacy section name which has been unused elsewhere for a long time.

The ".text.init" name is misleading if read as a synonym for ".init.text". Any CPU may execute __cpu_setup before turning the MMU on, so it should simply live in .text.

Remove the pointless section assignment. This will leave __cpu_setup in the .text section.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 20 October 2015, 1 commit

Submitted by Suzuki K. Poulose
This patch turns on the 16K page support in the kernel. We support 48bit VA (4 level page tables) and 47bit VA (3 level page tables).

With 16K we can map 128 entries using contiguous bit hint at level 3 to map 2M using a single TLB entry.

TODO: 16K supports 32 contiguous entries at level 2 to get us 1G (which is not yet supported by the infrastructure). That should be a separate patch altogether.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Jeremy Linton <jeremy.linton@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 07 October 2015, 2 commits

Submitted by Will Deacon
Our current switch_mm implementation suffers from a number of problems:

(1) The ASID allocator relies on IPIs to synchronise the CPUs on a rollover event

(2) Because of (1), we cannot allocate ASIDs with interrupts disabled and therefore make use of a TIF_SWITCH_MM flag to postpone the actual switch to finish_arch_post_lock_switch

(3) We run context switch with a reserved (invalid) TTBR0 value, even though the ASID and pgd are updated atomically

(4) We take a global spinlock (cpu_asid_lock) during context-switch

(5) We use h/w broadcast TLB operations when they are not required (e.g. in flush_context)

This patch addresses these problems by rewriting the ASID algorithm to match the bitmap-based arch/arm/ implementation more closely. This in turn allows us to remove much of the complications surrounding switch_mm, including the ugly thread flag.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Will Deacon
When cold-booting a CPU, we must invalidate any junk entries from the local TLB prior to enabling the MMU. This doesn't require broadcasting within the inner-shareable domain, so de-scope the operation to apply only to the local CPU.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
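A local (non-broadcast) flush uses the non-inner-shareable forms, roughly:

```c
/* Sketch: local TLB invalidation before the MMU comes up -- note "vmalle1" and
 * "nsh" rather than the broadcasting "vmalle1is"/"ish" variants. */
static inline void local_flush_tlb_all_sketch(void)
{
	asm volatile("dsb nshst; tlbi vmalle1; dsb nsh; isb" ::: "memory");
}
```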
-
- 20 August 2015, 1 commit

Submitted by Will Deacon
We don't want to expose the DCC to userspace, particularly as there is a kernel console driver for it.

This patch resets mdscr_el1 to disable userspace access to the DCC registers on the cold boot path.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 08 August 2015, 1 commit

Submitted by Jonathan (Zhixiong) Zhang
UEFI spec 2.5 section 2.3.6.1 defines that EFI_MEMORY_[UC|WC|WT|WB] are possible EFI memory types for AArch64. Each of those EFI memory types is mapped to a corresponding AArch64 memory type. So we need to define PROT_DEVICE_nGnRnE and PROT_NORMAL_WT additionally.

MT_NORMAL_WT is defined, and its encoding is added to MAIR_EL1 when initializing the CPU.

Signed-off-by: Jonathan (Zhixiong) Zhang <zjzhang@codeaurora.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1438936621-5215-6-git-send-email-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org>
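As an illustration of what "adding the encoding to MAIR_EL1" means, each memory type occupies one 8-bit attribute slot; the index and encoding below are shown for orientation only (0xbb is the architectural encoding for Normal, inner/outer Write-Through, Read-Allocate):

```c
/* Illustrative only: a MAIR_EL1 attribute slot for Normal Write-Through memory. */
#define MT_NORMAL_WT		5	/* attribute index chosen for the new type */
#define MAIR_ATTR_NORMAL_WT	0xbbULL

#define MAIR_ATTRIDX(attr, idx)	((attr) << ((idx) * 8))

/* The CPU setup code would fold this field into the MAIR_EL1 value it writes: */
#define MAIR_EL1_WT_FIELD	MAIR_ATTRIDX(MAIR_ATTR_NORMAL_WT, MT_NORMAL_WT)
```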
-
- 05 August 2015, 1 commit

Submitted by Will Deacon
The arm64 booting document requires that the bootloader has cleaned the kernel image to the PoC. However, when a CPU re-enters the kernel due to either a CPU hotplug "on" event or resuming from a low-power state (e.g. cpuidle), the kernel text may in-fact be dirty at the PoU due to things like alternative patching or even module loading.

Thanks to I-cache speculation with the MMU off, stale instructions could be fetched prior to enabling the MMU, potentially leading to crashes when executing regions of code that have been modified at runtime.

This patch addresses the issue by ensuring that the local I-cache is invalidated immediately after a CPU has enabled its MMU but before jumping out of the identity mapping. Any stale instructions fetched from the PoC will then be discarded and refetched correctly from the PoU. Patching kernel text executed prior to the MMU being enabled is prohibited, so the early entry code will always be clean.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
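The fix itself is a short maintenance sequence executed right after the MMU is switched on, conceptually:

```c
/* Sketch: discard anything speculated into the I-cache while the MMU was off. */
static inline void post_mmu_enable_icache_inval_sketch(void)
{
	asm volatile("ic iallu; dsb nsh; isb" ::: "memory");
}
```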
-
- 27 July 2015, 2 commits

Submitted by Will Deacon
Nobody seems to be producing !SMP systems anymore, so this is just becoming a source of kernel bugs, particularly if people want to use coherent DMA with non-shared pages.

This patch forces CONFIG_SMP=y for arm64, removing a modest amount of code in the process.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Catalin Marinas
The ARMv8.1 architecture extensions introduce support for hardware updates of the access and dirty information in page table entries. With TCR_EL1.HA enabled, when the CPU accesses an address with the PTE_AF bit cleared in the page table, instead of raising an access flag fault the CPU sets the actual page table entry bit. To ensure that kernel modifications to the page tables do not inadvertently revert a change introduced by hardware updates, the exclusive monitor (ldxr/stxr) is adopted in the pte accessors.

When TCR_EL1.HD is enabled, a write access to a memory location with the DBM (Dirty Bit Management) bit set in the corresponding pte automatically clears the read-only bit (AP[2]). Such DBM bit maps onto the Linux PTE_WRITE bit and to check whether a writable (DBM set) page is dirty, the kernel tests the PTE_RDONLY bit. In order to allow read-only and dirty pages, the kernel needs to preserve the software dirty bit. The hardware dirty status is transferred to the software dirty bit in ptep_set_wrprotect() (using load/store exclusive loop) and pte_modify().

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
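The dirty-preserving write-protect can be modelled with a plain compare-and-swap loop, as in the standalone sketch below (bit positions are illustrative; the kernel uses its own pte helpers and, at the time of this patch, an ldxr/stxr loop):

```c
#include <stdatomic.h>
#include <stdint.h>

#define PTE_RDONLY	(UINT64_C(1) << 7)	/* AP[2]: clear means writable/dirty */
#define PTE_DIRTY	(UINT64_C(1) << 55)	/* software dirty bit                */

/* Sketch: write-protect a pte without losing a dirty state that the hardware
 * (DBM) may have set concurrently by clearing AP[2]. */
static void pte_wrprotect_sketch(_Atomic uint64_t *ptep)
{
	uint64_t old = atomic_load_explicit(ptep, memory_order_relaxed);
	uint64_t new;

	do {
		new = old;
		if (!(new & PTE_RDONLY))	/* hardware marked it dirty ...  */
			new |= PTE_DIRTY;	/* ... remember that in software */
		new |= PTE_RDONLY;		/* now make it read-only         */
	} while (!atomic_compare_exchange_weak_explicit(ptep, &old, new,
							memory_order_relaxed,
							memory_order_relaxed));
}
```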
-
- 19 May 2015, 1 commit

Submitted by Mark Rutland
The documented semantics of flush_cache_all are not possible to provide for arm64 (short of flushing the entire physical address space by VA), and there are currently no users; KVM uses VA maintenance exclusively, cpu_reset is never called, and the only two users outside of arch code cannot be built for arm64.

While cpu_soft_reset and related functions (which call flush_cache_all) were thought to be useful for kexec, their current implementations only serve to mask bugs. For correctness kexec will need to perform maintenance by VA anyway to account for system caches, line migration, and other subtleties of the cache architecture. As the extent of this cache maintenance will be kexec-specific, it should probably live in the kexec code.

This patch removes flush_cache_all, and related unused components, preventing further abuse.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Geoff Levand <geoff@infradead.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 23 March 2015, 1 commit

Submitted by Ard Biesheuvel
The page size and the number of translation levels, and hence the supported virtual address range, are build-time configurables on arm64 whose optimal values are use case dependent. However, in the current implementation, if the system's RAM is located at a very high offset, the virtual address range needs to reflect that merely because the identity mapping, which is only used to enable or disable the MMU, requires the extended virtual range to map the physical memory at an equal virtual offset.

This patch relaxes that requirement, by increasing the number of translation levels for the identity mapping only, and only when actually needed, i.e., when system RAM's offset is found to be out of reach at runtime.

Tested-by: Laura Abbott <lauraa@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 27 January 2015, 1 commit

Submitted by Lorenzo Pieralisi
ARM64_CPU_SUSPEND config option was introduced to make code providing context save/restore selectable only on platforms requiring power management capabilities. Currently ARM64_CPU_SUSPEND depends on the PM_SLEEP config option which in turn is set by the SUSPEND config option.

The introduction of CPU_IDLE for arm64 requires that code configured by ARM64_CPU_SUSPEND (context save/restore) should be compiled in in order to enable the CPU idle driver to rely on CPU operations carrying out context save/restore. The ARM64_CPUIDLE config option (ARM64 generic idle driver) is therefore forced to select ARM64_CPU_SUSPEND, even if there may be (ie PM_SLEEP) failed dependencies, which is not a clean way of handling the kernel configuration option.

For these reasons, this patch removes the ARM64_CPU_SUSPEND config option and makes the context save/restore dependent on CPU_PM, which is selected whenever either SUSPEND or CPU_IDLE are configured, cleaning up dependencies in the process.

This way, code previously configured through ARM64_CPU_SUSPEND is compiled in whenever a power management subsystem requires it to be present in the kernel (SUSPEND || CPU_IDLE), which is the behaviour expected on ARM64 kernels.

The cpu_suspend and cpu_init_idle CPU operations are added only if CPU_IDLE is selected, since they are CPU_IDLE specific methods and should be grouped and defined accordingly.

PSCI CPU operations are updated to reflect the introduced changes.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 23 January 2015, 1 commit

Submitted by Suzuki K. Poulose
We initialise the SCTLR_EL1 value by read-modify-writeback of the desired bits, leaving the other bits (including reserved bits (RESx)) untouched. However, sometimes the boot monitor could leave garbage values in the RESx bits which could have different implications.

This patch makes sure that all the bits, including the RESx bits, are set to the proper state, except for the 'endianness' control bits, EE(25) & E0E(24), which are set early in el2_setup. Updated the state of Bit[6] in the comment to RES0.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 08 September 2014, 1 commit

Submitted by Arun Chandran
The current soft_restart() and setup_restart implementations incorrectly assume that the compiler will not spill/fill values to/from the stack. However this assumption seems to be wrong, revealed by the disassembly of the currently existing code (v3.16) built with Linaro GCC 4.9-2014.05.

ffffffc000085224 <soft_restart>:
ffffffc000085224:  a9be7bfd  stp   x29, x30, [sp,#-32]!
ffffffc000085228:  910003fd  mov   x29, sp
ffffffc00008522c:  f9000fa0  str   x0, [x29,#24]
ffffffc000085230:  94003d21  bl    ffffffc0000946b4 <setup_mm_for_reboot>
ffffffc000085234:  94003b33  bl    ffffffc000093f00 <flush_cache_all>
ffffffc000085238:  94003dfa  bl    ffffffc000094a20 <cpu_cache_off>
ffffffc00008523c:  94003b31  bl    ffffffc000093f00 <flush_cache_all>
ffffffc000085240:  b0003321  adrp  x1, ffffffc0006ea000 <reset_devices>
ffffffc000085244:  f9400fa0  ldr   x0, [x29,#24]   ----> spilled addr
ffffffc000085248:  f942fc22  ldr   x2, [x1,#1528]  ----> global memstart_addr
ffffffc00008524c:  f0000061  adrp  x1, ffffffc000094000 <__inval_cache_range+0x40>
ffffffc000085250:  91290021  add   x1, x1, #0xa40
ffffffc000085254:  8b010041  add   x1, x2, x1
ffffffc000085258:  d2c00802  mov   x2, #0x4000000000  // #274877906944
ffffffc00008525c:  8b020021  add   x1, x1, x2
ffffffc000085260:  d63f0020  blr   x1
...

Here the compiler generates memory accesses after the cache is disabled, loading stale values for the spilled value and global variable. As we cannot control when the compiler will access memory we must rewrite the functions in assembly to stash values we need in registers prior to disabling the cache, avoiding the use of memory.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Arun Chandran <achandran@mvista.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-