- 24 January 2018, 2 commits
-
-
By Jayachandran C
Use PSCI based mitigation for speculative execution attacks targeting the branch predictor. We use the same mechanism as the one used for Cortex-A CPUs: we expect the PSCI version call to have the side effect of clearing the BTBs.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jayachandran C <jnair@caviumnetworks.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
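A minimal sketch of how such a mitigation hook could be wired up, assuming the helper names (install_bp_hardening_cb() and the PSCI invalidation veneer symbols) carried over from the existing Cortex-A workaround; the actual registration code is not reproduced in this log.

```c
/* Hypothetical enable() callback for the affected capability entry: reuse
 * the PSCI "get version" firmware call as the branch predictor
 * invalidation routine, as the commit message describes.
 */
static int enable_psci_bp_hardening(void *data)
{
	const struct arm64_cpu_capabilities *entry = data;

	if (psci_ops.get_version)
		install_bp_hardening_cb(entry,
				(bp_hardening_cb_t)psci_ops.get_version,
				__psci_hyp_bp_inval_start,
				__psci_hyp_bp_inval_end);

	return 0;
}
```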
-
By Suzuki K Poulose
When a CPU is brought up after we have finalised the system-wide capabilities (i.e. features and errata), we make sure the new CPU doesn't need a new errata workaround which has not been detected already. However, we don't run the enable() method on the new CPU for the errata workarounds already detected. This could leave the new CPU running without the necessary workarounds. It is up to the "enable()" method to decide if this CPU should do something about the erratum.
Fixes: commit 6a6efbb4 ("arm64: Verify CPU errata work arounds on hotplugged CPU")
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
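A sketch of the verification-time loop this implies: for each erratum already detected system-wide, also invoke its enable() hook on the incoming CPU. The names (arm64_errata, cpus_have_cap, cpu_die_early) are assumed from the existing cpufeature code; the real patch may differ in detail.

```c
static void verify_local_cpu_errata_workarounds(void)
{
	const struct arm64_cpu_capabilities *caps = arm64_errata;

	for (; caps->matches; caps++) {
		if (cpus_have_cap(caps->capability)) {
			/* Erratum detected system-wide: let enable() decide
			 * what this particular CPU has to do about it. */
			if (caps->enable)
				caps->enable((void *)caps);
		} else if (caps->matches(caps, SCOPE_LOCAL_CPU)) {
			pr_crit("CPU%d: Requires workaround for %s, not detected at boot\n",
				smp_processor_id(), caps->desc);
			cpu_die_early();
		}
	}
}
```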
-
- 23 January 2018, 1 commit
-
-
By Marc Zyngier
We call arm64_apply_bp_hardening() from post_ttbr_update_workaround, which has the unexpected consequence of being triggered on every exception return to userspace when ARM64_SW_TTBR0_PAN is selected, even if no context switch actually occurred. This is a bit suboptimal, and it would be more logical to only invalidate the branch predictor when we actually switch to a different mm. In order to solve this, move the call to arm64_apply_bp_hardening() into check_and_switch_context(), where we're guaranteed to pick a different mm context.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
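Illustrative placement of the call, shown only as a sketch; the surrounding ASID-allocation and rollover logic in check_and_switch_context() is elided.

```c
void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
{
	/* ... ASID allocation / rollover handling elided ... */

	/*
	 * Invalidate the branch predictor here, where we know we are
	 * genuinely switching to a different mm, rather than on every
	 * exception return to userspace.
	 */
	arm64_apply_bp_hardening();

	/* ... TTBR0 handling elided ... */
	cpu_switch_mm(mm->pgd, mm);
}
```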
-
- 19 January 2018, 1 commit
-
-
By Kristina Martsenko
When booting a kernel without 52-bit PA support (e.g. a kernel with 4k pages) on a system with 52-bit memory, the kernel will currently try to use the 52-bit memory and crash. Fix this by ignoring any memory higher than what the kernel supports.
Fixes: f77d2817 ("arm64: enable 52-bit physical address support")
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
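One plausible way to implement this in arm64_memblock_init(), shown as a sketch: drop everything above the physical address mask the kernel was built for.

```c
void __init arm64_memblock_init(void)
{
	/*
	 * Remove memory the kernel cannot address: PHYS_MASK_SHIFT is 48
	 * for a kernel built without 52-bit PA support, so anything above
	 * that limit is ignored instead of being mapped and faulted on.
	 */
	memblock_remove(1ULL << PHYS_MASK_SHIFT, ULLONG_MAX);

	/* ... the rest of the memblock setup ... */
}
```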
-
- 17 January 2018, 1 commit
-
-
By Catalin Marinas
With ARM64_SW_TTBR0_PAN enabled, the exception entry code checks the active ASID to decide whether user access was enabled (non-zero ASID) when the exception was taken. On return from exception, if user access was previously disabled, it re-instates TTBR0_EL1 from the per-thread saved value (updated in switch_mm() or efi_set_pgd()). Commit 7655abb9 ("arm64: mm: Move ASID from TTBR0 to TTBR1") makes the TTBR0_EL1 + ASID switching non-atomic. Subsequently, commit 27a921e7 ("arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN") changes the __uaccess_ttbr0_disable() function and asm macro to first write the reserved TTBR0_EL1, followed by the ASID=0 update in TTBR1_EL1. If an exception occurs between these two, the exception return code will re-instate a valid TTBR0_EL1. A similar scenario can happen in cpu_switch_mm() between setting the reserved TTBR0_EL1 and the ASID update in cpu_do_switch_mm(). This patch reverts the entry.S check for ASID == 0 to TTBR0_EL1 and disables interrupts around the TTBR0_EL1 and ASID switching code in __uaccess_ttbr0_disable(). It also ensures that, when returning from the EFI runtime services, efi_set_pgd() doesn't leave a non-zero ASID in TTBR1_EL1 by using uaccess_ttbr0_{enable,disable}. The accesses to current_thread_info()->ttbr0 are updated to use READ_ONCE/WRITE_ONCE. As a safety measure, __uaccess_ttbr0_enable() always masks out any existing non-zero ASID in TTBR1_EL1 before writing in the new ASID.
Fixes: 27a921e7 ("arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN")
Acked-by: Will Deacon <will.deacon@arm.com>
Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: James Morse <james.morse@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Co-developed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
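A simplified sketch of the C-side helper after the fix. The key point is that the two system-register writes are made to appear atomic by masking interrupts around them; reserved_ttbr0_addr below is a stand-in for however the reserved (zeroed) page table is actually located, not the real expression.

```c
static inline void __uaccess_ttbr0_disable(void)
{
	unsigned long flags, ttbr;

	local_irq_save(flags);		/* no exception may observe the window */
	ttbr = read_sysreg(ttbr1_el1);
	ttbr &= ~TTBR_ASID_MASK;	/* ASID = 0 for the kernel */
	/* Point TTBR0_EL1 at the reserved (empty) table first ... */
	write_sysreg(reserved_ttbr0_addr, ttbr0_el1);
	isb();
	/* ... then publish the ASID = 0 state in TTBR1_EL1 */
	write_sysreg(ttbr, ttbr1_el1);
	isb();
	local_irq_restore(flags);
}
```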
-
- 16 January 2018, 23 commits
-
-
By Dongjiu Geng
ARMv8.2 adds a new bit, HCR_EL2.TEA, which routes synchronous external aborts to EL2, and a trap control bit, HCR_EL2.TERR, which traps all Non-secure EL1&0 error record accesses to EL2. This patch enables the two bits for the guest OS, guaranteeing that KVM takes external aborts and traps attempts to access the physical error registers. ERRIDR_EL1 advertises the number of error records; we return zero, meaning we can treat all the other registers as RAZ/WI too.
Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
[removed specific emulation, use trap_raz_wi() directly for everything, rephrased parts of the commit message]
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
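A sketch of what enabling those bits for a guest could look like. The bit positions follow the ARMv8.2 definition of HCR_EL2; exactly where KVM sets them is an assumption here.

```c
#define HCR_TERR	(UL(1) << 36)	/* trap EL1&0 error record accesses to EL2 */
#define HCR_TEA		(UL(1) << 37)	/* route synchronous external aborts to EL2 */

static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
{
	vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
	/* These bits are RES0 on CPUs without the RAS extensions, so it is
	 * safe to set them unconditionally for the guest. */
	vcpu->arch.hcr_el2 |= HCR_TEA | HCR_TERR;
	/* ... */
}
```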
-
By James Morse
We expect to have firmware-first handling of RAS SErrors, with errors notified via an APEI method. For systems without firmware-first, add some minimal handling to KVM. There are two ways KVM can take an SError due to a guest, and either may be a RAS error: we exit the guest due to an SError routed to EL2 by HCR_EL2.AMO, or we take an SError from EL2 when we unmask PSTATE.A from __guest_exit. The current SError-from-EL2 code unmasks SError and tries to fence any pending SError into a single instruction window. It then leaves SError unmasked. With the v8.2 RAS Extensions we may take an SError for a 'corrected' error, but KVM is only able to handle SErrors from EL2 if they occur during this single instruction window. The RAS Extensions give us a new instruction to synchronise and consume SErrors. The RAS Extensions document (ARM DDI 0587), '2.4.1 ESB and Unrecoverable errors', describes ESB as synchronising SError interrupts generated by 'instructions, translation table walks, hardware updates to the translation tables, and instruction fetches on the same PE'. This makes ESB equivalent to KVM's existing 'dsb, mrs-daifclr, isb' sequence. Use the alternatives to synchronise and consume any SError using ESB instead of unmasking and taking the SError. Set ARM_EXIT_WITH_SERROR_BIT in the exit_code so that we can restart the vcpu if it turns out this SError has no impact on the vcpu.
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By James Morse
We expect to have firmware-first handling of RAS SErrors, with errors notified via an APEI method. For systems without firmware-first, add some minimal handling to KVM. There are two ways KVM can take an SError due to a guest, and either may be a RAS error: we exit the guest due to an SError routed to EL2 by HCR_EL2.AMO, or we take an SError from EL2 when we unmask PSTATE.A from __guest_exit. For SErrors that interrupt a guest and are routed to EL2, the existing behaviour is to inject an impdef SError into the guest. Add code to handle RAS SErrors based on the ESR. For uncontained and uncategorized errors arm64_is_fatal_ras_serror() will panic(); these errors compromise the host too. All other error types are contained: for the fatal errors the vCPU can't make progress, so we inject a virtual SError. We ignore contained errors where we can make progress, since if we're lucky we may not hit them again. If only some of the CPUs support RAS the guest will see the cpufeature-sanitised version of the id registers, but we may still take a RAS SError on this CPU. Move the SError handling out of handle_exit() into a new handler that runs before we can be preempted. This allows us to use this_cpu_has_cap(), via arm64_is_ras_serror().
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By James Morse
When we exit a guest due to an SError, the vcpu fault info isn't updated with the ESR. Today this is only done for traps. The v8.2 RAS Extensions define ISS values for SError. Update the vcpu's fault_info with the ESR on SError so that handle_exit() can determine if this was a RAS SError and decode its severity.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By James Morse
If we deliver a virtual SError to the guest, the guest may defer it with an ESB instruction. The guest reads the deferred value via DISR_EL1, but the guest's view of DISR_EL1 is re-mapped to VDISR_EL2 when HCR_EL2.AMO is set. Add the KVM code to save/restore VDISR_EL2, and make it accessible to userspace as DISR_EL1.
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By James Morse
Prior to v8.2's RAS Extensions, the HCR_EL2.VSE 'virtual SError' feature generated an SError with an implementation-defined ESR_EL1.ISS, because we had no mechanism to specify the ESR value. On Juno this generates an all-zero ESR; the most significant bit 'ISV' is clear, indicating the remainder of the ISS field is invalid. With the RAS Extensions we have a mechanism to specify this value, and the most significant bit has a new meaning: 'IDS - Implementation Defined Syndrome'. An all-zero SError ESR now means 'RAS error: Uncategorized' instead of 'no valid ISS'. Add KVM support for the VSESR_EL2 register to specify an ESR value when HCR_EL2.VSE generates a virtual SError. Change kvm_inject_vabt() to specify an implementation-defined value. We only need to restore the VSESR_EL2 value when HCR_EL2.VSE is set; KVM saves/restores this bit during __{,de}activate_traps(), and hardware clears the bit once the guest has consumed the virtual SError. Future patches may add an API (or KVM CAP) to pend a virtual SError with a specified ESR.
Cc: Dongjiu Geng <gengdongjiu@huawei.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By James Morse
Non-VHE systems take an exception to EL2 in order to world-switch into the guest. When returning from the guest, KVM implicitly restores the DAIF flags when it returns to the kernel at EL1. With VHE none of this exception-level jumping happens, so KVM's world-switch code is exposed to the host kernel's DAIF values, and KVM spills the guest-exit DAIF values back into the host kernel. On entry to a guest we have Debug and SError exceptions unmasked; KVM has switched VBAR but isn't prepared to handle these. On guest exit, Debug exceptions are left disabled once we return to the host and will stay this way until we enter user space. Add a helper to mask/unmask DAIF around VHE guests. The unmask can only happen after the host's VBAR value has been synchronised by the isb in __vhe_hyp_call (via kvm_call_hyp()). Masking could be as late as setting KVM's VBAR value, but is kept here for symmetry.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
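A sketch of the pair of helpers this describes, using the kernel's local_daif_* accessors; the exact call sites in the VHE run loop are not shown here.

```c
static inline void kvm_arm_vhe_guest_enter(void)
{
	/* Mask Debug, SError, IRQ and FIQ before KVM switches VBAR_EL1 */
	local_daif_mask();
}

static inline void kvm_arm_vhe_guest_exit(void)
{
	/* Back to the host kernel's normal process-context DAIF values */
	local_daif_restore(DAIF_PROCCTX_NOIRQ);
}
```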
-
By James Morse
KVM would like to consume any pending SError (or RAS error) after guest exit. Today it has to unmask SError and use dsb+isb to synchronise the CPU. With the RAS extensions we can use ESB to synchronise any pending SError. Add the necessary macros to allow DISR to be read and converted to an ESR. We clear the DISR register when we enable the RAS cpufeature; at that point the kernel has not executed any ESB instructions, so any value we find in DISR must have belonged to firmware. Executing an ESB instruction is the only way to update DISR, so we can expect firmware to have handled any deferred SError. By the same logic we clear DISR in the idle path.
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By James Morse
ARM v8.2 has a feature to add implicit error synchronization barriers whenever the CPU enters or returns from an exception level. Add this to the features we always enable. CPUs that don't support this feature will treat the bit as RES0. This feature causes RAS errors that are not yet visible to software to become pending SErrors. We expect to have firmware-first RAS support, so synchronised RAS errors will be taken immediately to EL3. Any system without firmware-first handling of errors will take the SError either immediately after exception return, or when we unmask SError after entry.S's work. Adding IESB to the ELx flags causes it to be enabled by KVM and kexec too. Platform level RAS support may require additional firmware support.
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Suggested-by: Will Deacon <will.deacon@arm.com>
Link: https://www.spinics.net/lists/kvm-arm/msg28192.html
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By James Morse
Prior to v8.2, SError is an uncontainable fatal exception. The v8.2 RAS extensions use SError to notify software about RAS errors; these can be contained by the Error Synchronization Barrier. An ACPI system with firmware-first may use SError as its 'SEI' notification. Future patches may add code to 'claim' this SError as a notification. Other systems can distinguish these RAS errors from the SError ESR and use the AET bits and additional data from RAS error registers to handle the error. Future patches may add this kernel-first handling. Without support for either of these, we will panic(), even if we received a corrected error. Add code to decode the severity of RAS errors. We can safely ignore contained errors where the CPU can continue to make progress. For all other errors we continue to panic().
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
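A condensed sketch of what that severity decode could look like, keying off the AET field of the SError ESR. The constant names follow the ESR_ELx definitions added by this series, and the recoverable-error cases are simplified here.

```c
static bool arm64_is_fatal_ras_serror(struct pt_regs *regs, unsigned int esr)
{
	switch (esr & ESR_ELx_AET) {
	case ESR_ELx_AET_CE:	/* corrected error */
	case ESR_ELx_AET_UEO:	/* restartable, not yet consumed */
		/* The CPU can continue to make progress: safe to ignore */
		return false;
	default:
		/* Uncontainable, uncategorized or unrecoverable: panic() */
		return true;
	}
}
```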
-
By Xie XiuQi
ARM's v8.2 Extensions add support for Reliability, Availability and Serviceability (RAS). On CPUs with these extensions, system software can use additional barriers to isolate errors and determine if faults are pending. Add cpufeature detection. Platform level RAS support may require additional firmware support.
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Xie XiuQi <xiexiuqi@huawei.com>
[Rebased, added config option, reworded commit message]
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By James Morse
__cpu_setup() configures SCTLR_EL1 using some hard-coded hex masks, and el2_setup() duplicates some of this when setting RES1 bits. Let's make this the same as KVM's hyp_init, which uses named bits. First, add definitions for all the SCTLR_EL{1,2} bits, the RES{1,0} bits, and those we want to set or clear. Add build_bug checks to ensure all bits are either set or clear. This means we don't need to separately preserve the endianness configuration generated elsewhere. Finally, move the head.S and proc.S users of these hard-coded masks over to the macro versions.
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By James Morse
this_cpu_has_cap() tests caps->desc, not caps->matches, so it stops walking the list when it finds a 'silent' feature, instead of walking to the end of the list. Prior to v4.6's 644c2ae1 ("arm64: cpufeature: Test 'matches' pointer to find the end of the list") we always tested desc to find the end of a capability list. This was changed for dubious things like PAN_NOT_UAO. v4.7's e3661b12 ("arm64: Allow a capability to be checked on single CPU") added this_cpu_has_cap() using the old desc-style test.
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Dave Martin
When refactoring the sigreturn code to handle SVE, I changed the sigreturn implementation to store the new FPSIMD state from the user sigframe into task_struct before reloading the state into the CPU regs. This makes it easier to convert the data for SVE when needed. However, it turns out that the fpsimd_state structure passed into fpsimd_update_current_state is not fully initialised, so assigning the structure as a whole corrupts current->thread.fpsimd_state.cpu with uninitialised data. This means that if the garbage data written to .cpu happens to be a valid cpu number, and the task is subsequently migrated to the cpu identified by that number, and then tries to enter userspace, the CPU FPSIMD regs will be assumed to be correct for the task and not reloaded as they should be. This can result in returning to userspace with FPSIMD registers containing data that is stale or that belongs to another task or to the kernel. Knowingly handing around a kernel structure that is incompletely initialised with user data is a potential source of mistakes, especially across source file boundaries. To help avoid a repeat of this issue, this patch adapts the relevant internal API to hand around the user-accessible subset only: struct user_fpsimd_state. To avoid future surprises, this patch also converts all uses of struct fpsimd_state that really only access the user subset to use struct user_fpsimd_state. A few missing consts are added to function prototypes for good measure. Thanks to Will for spotting the cause of the bug here.
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
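The API shape this describes, sketched as before/after prototypes; the const placement is illustrative.

```c
/* Before: the whole kernel-internal FPSIMD state crosses the API boundary,
 * including the not-always-initialised .cpu field. */
void fpsimd_update_current_state(struct fpsimd_state *state);

/* After: only the user-visible register file is handed around. */
void fpsimd_update_current_state(struct user_fpsimd_state const *state);
```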
-
By Punit Agrawal
The PUD macros (PUD_TABLE_BIT, PUD_TYPE_MASK, PUD_TYPE_SECT) use pgdval_t even when pudval_t is available. Even though the underlying type for both (u64) is the same, it is confusing and may lead to issues in the future. Fix this by using pudval_t to define the PUD_* macros.
Fixes: 084bd298 ("ARM64: mm: HugeTLB support.")
Fixes: 206a2a73 ("arm64: mm: Create gigabyte kernel logical mappings where possible")
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
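Sketch of the change for the affected macros, with values shown for illustration in the style of pgtable-hwdef.h.

```c
/* Before: a PUD-level attribute defined in terms of the pgd value type */
#define PUD_TYPE_SECT		(_AT(pgdval_t, 1) << 0)

/* After: use the matching pudval_t instead */
#define PUD_TABLE_BIT		(_AT(pudval_t, 1) << 1)
#define PUD_TYPE_MASK		(_AT(pudval_t, 3) << 0)
#define PUD_TYPE_SECT		(_AT(pudval_t, 1) << 0)
```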
-
By Stephen Boyd
It isn't entirely obvious if we're using software PAN because we don't say anything about it in the boot log. But if we're using hardware PAN we'll print a nice CPU feature message indicating it. Add a print for software PAN too so we know if it's being used or not.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
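A sketch of such a boot-time message; the exact condition, placement and wording are illustrative.

```c
/* Only mention emulated PAN when the hardware feature isn't in use */
if (IS_ENABLED(CONFIG_ARM64_SW_TTBR0_PAN) &&
    !cpus_have_const_cap(ARM64_HAS_PAN))
	pr_info("emulated: Privileged Access Never (PAN) using TTBR0_EL1 switching\n");
```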
-
By Kristina Martsenko
The 'pos' argument is used to select where in TCR to write the value: the IPS or PS bitfield.
Fixes: 787fd1d0 ("arm64: limit PA size to supported range")
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Kristina Martsenko
Commit fa2a8445 incorrectly masks the index of the HYP ID map pgd entry, causing a non-VHE kernel to hang during boot. This happens when VA_BITS=48 and the ID map text is in 52-bit physical memory. In this case we don't need an extra table level but need more entries in the top-level table, so we need to map into hyp_pgd and need to use __kvm_idmap_ptrs_per_pgd to mask in the extra bits. However, __create_hyp_mappings currently masks by PTRS_PER_PGD instead. Fix it so that we always use __kvm_idmap_ptrs_per_pgd for the HYP ID map. This ensures that we use the larger mask for the top-level ID map table when it has more entries. In all other cases, PTRS_PER_PGD is used as normal.
Fixes: fa2a8445 ("arm64: allow ID map to be extended to 52 bits")
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Kristina Martsenko
Commit fa2a8445 added support for extending the ID map to 52 bits, but accidentally dropped a required change to __cpu_uses_extended_idmap. As a result, the kernel fails to boot when VA_BITS = 48 and the ID map text is in 52-bit physical memory, because we reduce TCR.T0SZ to cover the ID map but then never set it back to VA_BITS. Add back the change, and also clean up some double parentheses.
Fixes: fa2a8445 ("arm64: allow ID map to be extended to 52 bits")
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Laura Abbott
Printing kernel addresses should be done in limited circumstances, mostly for debugging purposes. Printing out the virtual memory layout at every kernel bootup doesn't really fall into this category, so delete the prints. There are other ways to get the same information.
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Wei Yongjun
In case of error, the function of_platform_device_create() returns a NULL pointer, not ERR_PTR(). The IS_ERR() test in the return value check should be replaced with a NULL test.
Fixes: 677a60bd ("firmware: arm_sdei: Discover SDEI support via ACPI")
Acked-by: James Morse <james.morse@arm.com>
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
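The corrected pattern, sketched; the error value returned on failure is a placeholder.

```c
pdev = of_platform_device_create(np, NULL, NULL);
/* of_platform_device_create() returns NULL on failure, never ERR_PTR() */
if (!pdev)
	return -ENODEV;	/* placeholder error code */
```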
-
By Suzuki K Poulose
We set dsu_pmu->num_counters to -1 when the DSU is allocated but not yet initialised, i.e. when none of the CPUs in the DSU are active. However, we use an unsigned field for num_counters. Switch this to a signed field.
Fixes: 7520fa99 ("perf: ARM DynamIQ Shared Unit PMU support")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Suzuki K Poulose
Sometimes a single capability could be listed multiple times with differing matches(), e.g. CPU errata for different MIDR versions. This breaks verify_local_cpu_feature() and this_cpu_has_cap(), as we stop checking for a capability on a CPU at the first entry in the given table, which is not sufficient. Make sure we run the checks for all entries of the same capability. We do this by fixing __this_cpu_has_cap() to run through all the entries in the given table for a match, and reuse it for verify_local_cpu_feature().
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 15 January 2018, 9 commits
-
-
By Stephen Boyd
The Kryo CPUs are also affected by the Falkor 1003 erratum, so we need to do the same workaround on Kryo CPUs. The MIDR is slightly more complicated here: the PART number is not always the same when looking at all the bits from 15 to 4. Drop the lower 8 bits and just look at the top 4 to see if it's '2', and then consider those as Kryo CPUs. This covers all the combinations without having to list them all out.
Fixes: 38fd94b0 ("arm64: Work around Falkor erratum 1003")
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
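A sketch of an MIDR match along those lines: check the implementer and only the top nibble of the part number. The helper macros are from cputype.h; the real matcher may be written differently (e.g. via a mask/compare on the whole MIDR).

```c
static bool is_kryo_midr(void)
{
	u32 midr = read_cpuid_id();

	/* PartNum is 12 bits: drop the low 8 and compare the top nibble */
	return MIDR_IMPLEMENTOR(midr) == ARM_CPU_IMP_QCOM &&
	       (MIDR_PARTNUM(midr) >> 8) == 2;
}
```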
-
By Steve Capper
Currently the early assembler page table code assumes that precisely 1 pgd, 1 pud and 1 pmd are sufficient to represent the early kernel text mappings. Unfortunately this is rarely the case when running with a 16KB granule, and we also run into limits with a 4KB granule when building much larger kernels. This patch re-writes the early page table logic to compute indices of mappings for each level of page table, and if multiple indices are required, the next-level page table is scaled up accordingly. Also the required size of the swapper_pg_dir is computed at link time to cover the mapping [KIMAGE_ADDR + VOFFSET, _end]. When KASLR is enabled, an extra page is set aside for each level that may require extra entries at runtime.
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Steve Capper
The trampoline page tables are positioned after the early page tables in the kernel linker script. As we are about to change the early page table logic to resolve the swapper size at link time as opposed to compile time, the SWAPPER_DIR_SIZE variable (currently used to locate the trampoline) will be rendered unsuitable for low-level assembler. This patch solves this issue by moving the trampoline before the PAN page tables. The offset to the trampoline from ttbr1 can then be expressed by: PAGE_SIZE + RESERVED_TTBR0_SIZE, which is available to the entry assembler.
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Steve Capper
Currently one resolves the location of the reserved_ttbr0 for PAN by taking a positive offset from swapper_pg_dir. In a future patch we wish to extend the swapper such that its size is determined at link time rather than compile time, rendering SWAPPER_DIR_SIZE unsuitable for such a low-level calculation. In this patch we re-arrange the order of the linker script such that instead one computes reserved_ttbr0 by subtracting RESERVED_TTBR0_SIZE from swapper_pg_dir.
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By James Morse
When CONFIG_UNMAP_KERNEL_AT_EL0 is set, the SDEI entry point and the rest of the kernel may be unmapped when we take an event. If this may be the case, use an entry trampoline that can switch to the kernel page tables. We can't use the provided PSTATE to determine whether to switch page tables, as we may have interrupted the kernel's entry trampoline (or a normal-priority event that interrupted the kernel's entry trampoline). Instead, test for a user ASID in ttbr1_el1. Save a value in regs->addr_limit to indicate whether we need to restore the original ASID when returning from this event. This value is only used by do_page_fault(), which we don't call with the SDEI regs.
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By James Morse
SDEI needs to calculate an offset in the trampoline page too. Move the extern char[] to sections.h. This patch just moves code around.
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By James Morse
SDEI defines a new ACPI table to indicate the presence of the interface. The conduit is discovered in the same way as PSCI. For ACPI we need to create the platform device ourselves, as SDEI doesn't have an entry in the DSDT. The SDEI platform device should be created after ACPI has been initialised so that we can parse the table, but before GHES devices are created, which may register SDE events if they use SDEI as their notification type.
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By James Morse
SDEI inherits the 'use hvc' bit that is also used by PSCI. PSCI does all its initialisation early; SDEI does its late. Remove the __init annotation from acpi_psci_use_hvc().
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By James Morse
Private SDE events are per-cpu, and need to be registered and enabled on each CPU. Hide this detail from the caller by adapting our {,un}register and {en,dis}able calls to send an IPI to each CPU if the event is private. CPU-private events are unregistered when the CPU is powered off, and re-registered when the CPU is brought back online. This saves bringing secondary cores back online to call private_reset() on shutdown, kexec and resume from hibernate.
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 13 January 2018, 3 commits
-
-
By James Morse
When a CPU enters an idle lower-power state or is powering off, we need to mask SDE events so that no events can be delivered while we are messing with the MMU, as the registered entry points won't be valid. If the system reboots, we want to unregister all events and mask the CPUs. For kexec this allows us to hand a clean slate to the next kernel instead of relying on it to call sdei_{private,system}_data_reset(). For hibernate we unregister all events and re-register them on restore, in case we restored with the SDE code loaded at a different address (e.g. KASLR). Add all the notifiers necessary to do this. We only support shared events, so all events are left registered and enabled over CPU hotplug.
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
[catalin.marinas@arm.com: added CPU_PM_ENTER_FAILED case]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
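One of the notifiers this implies, sketched as a CPU PM callback; the callback and helper names (sdei_mask_local_cpu / sdei_unmask_local_cpu) are assumptions based on the description above.

```c
static int sdei_pm_notifier(struct notifier_block *nb, unsigned long action,
			    void *data)
{
	switch (action) {
	case CPU_PM_ENTER:
		sdei_mask_local_cpu();	/* no events while the MMU may be off */
		break;
	case CPU_PM_EXIT:
	case CPU_PM_ENTER_FAILED:
		sdei_unmask_local_cpu();
		break;
	}

	return NOTIFY_OK;
}

static struct notifier_block sdei_pm_nb = {
	.notifier_call = sdei_pm_notifier,
};

/* registered at probe time with: cpu_pm_register_notifier(&sdei_pm_nb); */
```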
-
By James Morse
The Software Delegated Exception Interface (SDEI) is an ARM standard for registering callbacks from the platform firmware into the OS. This is typically used to implement RAS notifications. Such notifications enter the kernel at the registered entry point with the register values of the interrupted CPU context. Because this is not a CPU exception, it cannot reuse the existing entry code (crucially, we don't implicitly know which exception level we interrupted). Add the entry point to entry.S to set us up for calling into C code. If the event interrupted code that had interrupts masked, we always return to that location. Otherwise we pretend this was an IRQ, and use SDEI's complete_and_resume call to return to vbar_el1 + offset. This allows the kernel to deliver signals to user space processes. For KVM this triggers the world switch, a quick spin round vcpu_run, then back into the guest, unless there are pending signals. Add sdei_mask_local_cpu() calls to the smp_send_stop() code; this covers the panic() code-path, which doesn't invoke cpuhotplug notifiers. Because we can interrupt entry-from/exit-to another EL, we can't trust the value in sp_el0 or x29, even if we interrupted the kernel; in this case the code in entry.S will save/restore sp_el0 and use the value in __entry_task. When we have VMAP stacks we can interrupt the stack-overflow test, which stirs x0 into sp, meaning we have to have our own VMAP stacks. For now these are allocated when we probe the interface. Future patches will add refcounting hooks to allow the arch code to allocate them lazily.
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By James Morse
Add __uaccess_{en,dis}able_hw_pan() helpers to set/clear the PSTATE.PAN bit.
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
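What the pair of helpers plausibly looks like, mirroring the existing PAN alternatives in the uaccess code; shown as a sketch, exact macro spellings may differ.

```c
static inline void __uaccess_disable_hw_pan(void)
{
	/* Clear PSTATE.PAN only on CPUs that actually implement PAN */
	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,
			CONFIG_ARM64_PAN));
}

static inline void __uaccess_enable_hw_pan(void)
{
	/* Set PSTATE.PAN again once user access is no longer needed */
	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,
			CONFIG_ARM64_PAN));
}
```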
-