- 05 March 2018, 2 commits
-
-
By Kees Cook

The word "feature" is repeated in the CPU features reporting. This drops it for improved readability.

Before (redundant "feature" word):

  SMP: Total of 4 processors activated.
  CPU features: detected feature: 32-bit EL0 Support
  CPU features: detected feature: Kernel page table isolation (KPTI)
  CPU features: emulated: Privileged Access Never (PAN) using TTBR0_EL1 switching
  CPU: All CPU(s) started at EL2

After:

  SMP: Total of 4 processors activated.
  CPU features: detected: 32-bit EL0 Support
  CPU features: detected: Kernel page table isolation (KPTI)
  CPU features: emulated: Privileged Access Never (PAN) using TTBR0_EL1 switching
  CPU: All CPU(s) started at EL2

Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Kees Cook

The PAN emulation notification was only happening for non-boot CPUs, and only if CPU capabilities had already been configured. This seems to be the wrong place, as the emulation is system-wide and isn't attached to a capability, so the report didn't normally appear at all. Instead, report it once from the boot CPU.

Before (missing PAN emulation report):

  SMP: Total of 4 processors activated.
  CPU features: detected feature: 32-bit EL0 Support
  CPU features: detected feature: Kernel page table isolation (KPTI)
  CPU: All CPU(s) started at EL2

After:

  SMP: Total of 4 processors activated.
  CPU features: detected feature: 32-bit EL0 Support
  CPU features: detected feature: Kernel page table isolation (KPTI)
  CPU features: emulated: Privileged Access Never (PAN) using TTBR0_EL1 switching
  CPU: All CPU(s) started at EL2

Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 20 February 2018, 1 commit
-
-
By Will Deacon

Our field definitions for CTR_EL0 suffer from a number of problems:

  - The IDC and DIC fields are missing, which causes us to enable CTR trapping on CPUs with either of these returning non-zero values.
  - The ERG field is FTR_LOWER_SAFE, whereas it should be treated like CWG as FTR_HIGHER_SAFE so that applications can use it to avoid false sharing.
  - [nit] A RES1 field is described as "RAO".

This patch updates the CTR_EL0 field definitions to fix these issues.

Cc: <stable@vger.kernel.org>
Cc: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 07 February 2018, 2 commits
-
-
By Marc Zyngier

Cavium ThunderX's erratum 27456 results in corruption of icache entries that are loaded from memory mapped as non-global (i.e. ASID-tagged). As KPTI is based on memory being mapped non-global, let's prevent it from kicking in if this erratum is detected.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[will: Update comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Will Deacon

Defaulting to global mappings for kernel space is generally good for performance and appears to be necessary for Cavium ThunderX. If we subsequently decide that we need to enable KPTI, then we need to rewrite our existing page table entries to be non-global. This is fiddly, and made worse by the possible use of contiguous mappings, which require a strict break-before-make sequence.

Since the enable callback runs on each online CPU from stop_machine context, we can have all CPUs enter the idmap, where secondaries can wait for the primary CPU to rewrite swapper with its MMU off. It's all fairly horrible, but at least it only runs once.

Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 24 January 2018, 1 commit
-
-
By Jayachandran C

Whitelist Broadcom Vulcan/Cavium ThunderX2 processors in unmap_kernel_at_el0(). These CPUs are not vulnerable to CVE-2017-5754 and do not need KPTI when KASLR is off.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jayachandran C <jnair@caviumnetworks.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
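A sketch of what such a MIDR-based whitelist check can look like, using the model macros from <asm/cputype.h>; this is an illustration of the idea rather than the exact hunk, and the wrapper name is made up:

    /* Illustrative sketch: skip KPTI on parts known not to be vulnerable. */
    static bool cpu_is_meltdown_safe(void)
    {
            switch (read_cpuid_id() & MIDR_CPU_MODEL_MASK) {
            case MIDR_CAVIUM_THUNDERX2:
            case MIDR_BRCM_VULCAN:
                    return true;
            default:
                    return false;
            }
    }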
-
- 16 January 2018, 5 commits
-
-
By James Morse

KVM would like to consume any pending SError (or RAS error) after guest exit. Today it has to unmask SError and use dsb+isb to synchronise the CPU. With the RAS extensions we can use ESB to synchronise any pending SError.

Add the necessary macros to allow DISR to be read and converted to an ESR. We clear the DISR register when we enable the RAS cpufeature; since the kernel has not executed any ESB instructions by that point, any value we find in DISR must have belonged to firmware. Executing an ESB instruction is the only way to update DISR, so we can expect firmware to have handled any deferred SError. By the same logic we clear DISR in the idle path.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Xie XiuQi

ARM's v8.2 extensions add support for Reliability, Availability and Serviceability (RAS). On CPUs with these extensions, system software can use additional barriers to isolate errors and determine if faults are pending. Add cpufeature detection. Platform-level RAS support may require additional firmware support.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Xie XiuQi <xiexiuqi@huawei.com>
[Rebased, added config option, reworded commit message]
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By James Morse

this_cpu_has_cap() tests caps->desc, not caps->matches, so it stops walking the list when it finds a 'silent' feature, instead of walking to the end of the list.

Prior to v4.6's 644c2ae1 ("arm64: cpufeature: Test 'matches' pointer to find the end of the list") we always tested desc to find the end of a capability list. This was changed for dubious things like PAN_NOT_UAO. v4.7's e3661b12 ("arm64: Allow a capability to be checked on single CPU") added this_cpu_has_cap() using the old desc-style test.

CC: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Stephen Boyd

It isn't entirely obvious if we're using software PAN, because we don't say anything about it in the boot log; if we're using hardware PAN we print a nice CPU feature message indicating it. Add a print for software PAN too so we know whether it's being used.

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
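A minimal sketch of the kind of report added here, assuming the CONFIG_ARM64_SW_TTBR0_PAN config symbol and the existing ARM64_HAS_PAN capability; the helper name and exact message text are illustrative:

    static void __init report_sw_pan(void)  /* hypothetical helper name */
    {
            /* Only mention the emulation when hardware PAN isn't in use. */
            if (IS_ENABLED(CONFIG_ARM64_SW_TTBR0_PAN) &&
                !cpus_have_const_cap(ARM64_HAS_PAN))
                    pr_info("emulated: Privileged Access Never (PAN) using TTBR0_EL1 switching\n");
    }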
-
By Suzuki K Poulose

Sometimes a single capability can be listed multiple times with differing matches(), e.g. CPU errata for different MIDR versions. This breaks verify_local_cpu_feature() and this_cpu_has_cap(), as we stop checking for a capability on a CPU at the first entry in the given table, which is not sufficient. Make sure we run the checks for all entries of the same capability. We do this by fixing __this_cpu_has_cap() to run through all the entries in the given table for a match, and reuse it for verify_local_cpu_feature().

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
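A sketch of the fixed lookup described above: keep walking to the end of the table (marked by a NULL matches pointer) so that every entry carrying the same capability number gets a chance to match. Field and scope names follow the arm64_cpu_capabilities structure; treat this as an outline, not the exact kernel code:

    static bool __this_cpu_has_cap(const struct arm64_cpu_capabilities *caps,
                                   unsigned int cap)
    {
            /* A NULL 'matches' pointer marks the end of the list. */
            for (; caps->matches; caps++)
                    if (caps->capability == cap &&
                        caps->matches(caps, SCOPE_LOCAL_CPU))
                            return true;

            return false;
    }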
-
- 13 January 2018, 1 commit
-
-
By James Morse

Now that KVM uses tpidr_el2 in the same way as Linux's cpu_offset in tpidr_el1, merge the two. This saves KVM from saving/restoring tpidr_el1 on VHE hosts, and allows future code to blindly access per-cpu variables without triggering a world switch.

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 09 January 2018, 3 commits
-
-
By Will Deacon

Aliasing attacks against CPU branch predictors can allow an attacker to redirect speculative control flow on some CPUs and potentially divulge information from one context to another. This patch adds initial skeleton code behind a new Kconfig option to enable implementation-specific mitigations against these attacks for CPUs that are affected.

Co-developed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Will Deacon

In order to invoke the CPU capability ->matches callback from the ->enable callback for applying local-CPU workarounds, we need a handle on the capability structure. This patch passes a pointer to the capability structure to the ->enable callback.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Will Deacon

For non-KASLR kernels where the KPTI behaviour has not been overridden on the command line, we can use ID_AA64PFR0_EL1.CSV3 to determine whether or not we should unmap the kernel whilst running at EL0.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
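A sketch of the CSV3 check, using the sanitised-register helpers from cpufeature.h; the wrapper function name is made up for illustration:

    static bool kernel_needs_kpti(void)     /* illustrative name */
    {
            u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);

            /* CSV3 == 0 means no architectural guarantee against CVE-2017-5754. */
            return cpuid_feature_extract_unsigned_field(pfr0,
                                                        ID_AA64PFR0_CSV3_SHIFT) == 0;
    }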
-
- 05 January 2018, 1 commit
-
-
By Dongjiu Geng

The ARMv8.4 extensions add new NEON instructions for performing a multiplication of each FP16 element of one vector with the corresponding FP16 element of a second vector, and for adding or subtracting this, without an intermediate rounding, to the corresponding FP32 element in a third vector. This patch detects this feature and lets userspace know about it via a HWCAP bit and MRS emulation.

Cc: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 14 December 2017, 1 commit
-
-
By Dave Martin

Currently, the SVE field in ID_AA64PFR0_EL1 is visible unconditionally to userspace via the CPU ID register emulation, irrespective of the kernel config. This means that if a kernel configured with CONFIG_ARM64_SVE=n is run on SVE-capable hardware, userspace will see SVE reported as present in the ID regs even though the kernel forbids execution of SVE instructions.

This patch makes the exposure of the SVE field in ID_AA64PFR0_EL1 conditional on CONFIG_ARM64_SVE=y. Since future architecture features are likely to encounter a similar requirement, this patch adds a suitable helper macro for use when declaring config-conditional ID register fields.

Fixes: 43994d82 ("arm64/sve: Detect SVE and activate runtime support")
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
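The helper boils down to something like the following sketch (exact naming aside): the field is only visible to userspace ID-register emulation when the corresponding Kconfig symbol is enabled, and hidden otherwise.

    #define FTR_VISIBLE_IF_IS_ENABLED(config) \
            (IS_ENABLED(config) ? FTR_VISIBLE : FTR_HIDDEN)

    /* Used roughly like this in the ID_AA64PFR0_EL1 field table: */
    /*   ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE), ...) */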
-
- 11 December 2017, 1 commit
-
-
By Will Deacon

Allow explicit disabling of the entry trampoline on the kernel command line (kpti=off) by adding a fake CPU feature (ARM64_UNMAP_KERNEL_AT_EL0) that can be used to toggle the alternative sequences in our entry code and avoid use of the trampoline altogether if desired. This also allows us to make use of a static key in arm64_kernel_unmapped_at_el0().

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
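A sketch of the resulting fast-path check; the static key name here is assumed, and the real kernel additionally patches the entry code via alternatives keyed on the ARM64_UNMAP_KERNEL_AT_EL0 capability:

    DECLARE_STATIC_KEY_FALSE(kpti_enabled);         /* assumed key name */

    static inline bool arm64_kernel_unmapped_at_el0(void)
    {
            return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) &&
                   static_branch_unlikely(&kpti_enabled);
    }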
-
- 03 November 2017, 3 commits
-
-
By Dave Martin

This patch enables detection of hardware SVE support via the cpufeatures framework, and reports its presence to the kernel and userspace via the new ARM64_SVE cpucap and HWCAP_SVE hwcap respectively. Userspace can also detect SVE using ID_AA64PFR0_EL1, via the cpufeatures MRS emulation.

When running on hardware that supports SVE, this enables runtime kernel support for SVE, and allows user tasks to execute SVE instructions and make use of the SVE-specific user/kernel interface extensions implemented by this series.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Dave Martin

This patch uses the cpufeatures framework to determine common SVE capabilities and vector lengths, and configures the runtime SVE support code appropriately.

ZCR_ELx is not really a feature register, but it is convenient to use it as a template for recording the maximum vector length supported by a CPU, using the LEN field. This field is similar to a feature field in that it is a contiguous bitfield for which we want to determine the minimum system-wide value. This patch adds ZCR as a pseudo-register in cpuinfo/cpufeatures, with appropriate custom code to populate it. Finding the minimum supported value of the LEN field is left to the cpufeatures framework in the usual way.

The meaning of ID_AA64ZFR0_EL1 is not architecturally defined yet, so for now we just require it to be zero. Note that much of this code is dormant and SVE still won't be used yet, since system_supports_sve() remains hardwired to false.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Dave Martin

update_cpu_features() currently cannot tell whether it is being called during early or late secondary boot. This doesn't desperately matter for anything it currently does. However, SVE will need to know here whether the set of available vector lengths is known or still to be determined when booting a CPU, so that it can be updated appropriately.

This patch simply moves the sys_caps_initialised stuff to the top of the file so that it can be used more widely. There doesn't seem to be a more obvious place to put it.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 25 October 2017, 1 commit
-
-
By Julien Thierry

A Software Step exception is missing after stepping a trapped instruction. Ensure SPSR.SS gets set to 0 after emulating/skipping a trapped instruction, before doing ERET.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
[will: replaced AARCH32_INSN_SIZE with 4]
Signed-off-by: Will Deacon <will.deacon@arm.com>
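The idea, roughly, as a sketch: after emulating a trapped instruction, advance the PC and clear the SS bit in the saved PSTATE so that a pending software-step exception is delivered on ERET instead of being lost. DBG_SPSR_SS is the SS bit mask from <asm/debug-monitors.h>; the helper name is illustrative:

    static void skip_emulated_instruction(struct pt_regs *regs)  /* illustrative */
    {
            regs->pc += 4;                  /* step over the trapped instruction */
            regs->pstate &= ~DBG_SPSR_SS;   /* take any pending step exception after ERET */
    }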
-
- 20 October 2017, 1 commit
-
-
By Suzuki K Poulose

Now that the ARM ARM clearly specifies the rules for inferring the values of the ID register fields, fix the types of the feature bits we have in the kernel. ARM ARM DDI 0487B.b, section D10.1.4 "Principles of the ID scheme for fields in ID registers", lists the registers to which the scheme applies, along with the exceptions.

This patch changes the relevant feature bits from FTR_EXACT to FTR_LOWER_SAFE so that the safer value is selected. This will enable an older kernel running on a new CPU to detect the safer option rather than completely disabling the feature.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
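In terms of how the sanitised system-wide value is combined, the difference amounts to roughly this sketch: FTR_EXACT demands that all CPUs agree on the field (otherwise it falls back to a safe default), whereas FTR_LOWER_SAFE simply keeps the smaller, more conservative value seen so far:

    /* Sketch: folding a field value from a newly booted CPU into the system value. */
    static s64 lower_safe(s64 new, s64 cur)
    {
            return new < cur ? new : cur;   /* keep the more conservative of the two */
    }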
-
- 11 October 2017, 1 commit
-
-
By Suzuki K Poulose

ARMv8-A adds a few optional features for ARMv8.2 and ARMv8.3. Expose them to userspace via HWCAPs and mrs emulation:

  SHA2-512 - Instruction support for the SHA512 hash algorithm (e.g. SHA512H, SHA512H2, SHA512SU0, SHA512SU1)
  SHA3     - SHA3 crypto instructions (EOR3, RAX1, XAR, BCAX)
  SM3      - Instruction support for the Chinese cryptography algorithm SM3
  SM4      - Instruction support for the Chinese cryptography algorithm SM4
  DP       - Dot Product instructions (UDOT, SDOT)

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 06 October 2017, 1 commit
-
-
By Suzuki K Poulose

We trap and emulate some instructions (e.g. mrs, deprecated instructions) for userspace. However, the handlers for these are registered as late_initcalls, and userspace could be up and running from the initramfs by that time (with populate_rootfs, which is a rootfs_initcall()). This could cause early applications to fail like so:

  [ 11.152061] modprobe[93]: undefined instruction: pc=0000ffff8ca48ff4

This patch promotes the specific calls to core_initcalls, which are guaranteed to complete before we hit userspace.

Cc: stable@vger.kernel.org
Cc: Dave Martin <dave.martin@arm.com>
Cc: Matthias Brugger <mbrugger@suse.com>
Cc: James Morse <james.morse@arm.com>
Reported-by: Matwey V. Kornilov <matwey.kornilov@gmail.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
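The change amounts to moving the handler registration earlier in the initcall ordering, along these lines; the registration call shown is illustrative, not the exact function body:

    static int __init enable_mrs_emulation(void)
    {
            register_mrs_emulation_hooks();     /* illustrative registration call */
            return 0;
    }

    core_initcall(enable_mrs_emulation);        /* was: late_initcall(enable_mrs_emulation); */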
-
- 09 August 2017, 2 commits
-
-
By Robin Murphy

Add a clean-to-point-of-persistence cache maintenance helper, and wire up the basic architectural support for the pmem driver based on it.

Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
[catalin.marinas@arm.com: move arch_*_pmem() functions to arch/arm64/mm/flush.c]
[catalin.marinas@arm.com: change dmb(sy) to dmb(osh)]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
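A sketch of the wiring described here, assuming the new clean-to-PoP primitive is exposed as __clean_dcache_area_pop():

    void arch_wb_cache_pmem(void *addr, size_t size)
    {
            /* Clean the range to the Point of Persistence. */
            __clean_dcache_area_pop(addr, size);
    }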
-
By Robin Murphy

The ARMv8.2-DCPoP feature introduces persistent memory support to the architecture by defining a point of persistence in the memory hierarchy, and a corresponding cache maintenance operation, DC CVAP. Expose the support via HWCAP and MRS emulation.

Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 22 June 2017, 1 commit
-
-
By Mark Rutland

When debugging a kernel panic(), it can be useful to know which CPU features have been detected by the kernel, as some code paths can depend on these (and may have been patched at runtime). This patch adds a notifier to dump the detected CPU caps (as a hex string) at panic(), when we log other information useful for debugging.

On a Juno R1 system running v4.12-rc5, this looks like:

  [ 615.431249] Kernel panic - not syncing: Fatal exception in interrupt
  [ 615.437609] SMP: stopping secondary CPUs
  [ 615.441872] Kernel Offset: disabled
  [ 615.445372] CPU features: 0x02086
  [ 615.448522] Memory Limit: none

A developer can decode this by looking at the corresponding <asm/cpucaps.h> bits. For example, the above decodes as:

  * bit 1:  ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE
  * bit 2:  ARM64_WORKAROUND_845719
  * bit 7:  ARM64_WORKAROUND_834220
  * bit 13: ARM64_HAS_32BIT_EL0

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Steve Capper <steve.capper@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
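A sketch of such a panic notifier, assuming the detected caps live in a bitmap like cpu_hwcaps sized by ARM64_NCAPS (printk's %*pb specifier prints a bitmap in hex):

    static int dump_cpu_caps(struct notifier_block *self, unsigned long v, void *p)
    {
            /* Print the detected capability bitmap so it lands in the panic log. */
            pr_emerg("CPU features: %*pb\n", ARM64_NCAPS, &cpu_hwcaps);
            return 0;
    }

    static struct notifier_block cpu_caps_panic_block = {
            .notifier_call = dump_cpu_caps,
    };

    static int __init register_cpu_caps_notifier(void)
    {
            atomic_notifier_chain_register(&panic_notifier_list,
                                           &cpu_caps_panic_block);
            return 0;
    }
    core_initcall(register_cpu_caps_notifier);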
-
- 05 June 2017, 1 commit
-
-
By Will Deacon

Commit 3fde2999 ("arm64: cpufeature: Don't dump useless backtrace on CPU_OUT_OF_SPEC") changed the cpufeature detection code to use add_taint instead of WARN_TAINT_ONCE when detecting a heterogeneous system with mismatched feature support. Unfortunately, this resulted in all systems getting the taint, regardless of any feature mismatch. This patch fixes the problem by conditionalising the taint on detecting a feature mismatch.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 30 May 2017, 1 commit
-
-
By Will Deacon

Unfortunately, it turns out that mismatched CPU features in big.LITTLE systems are starting to appear in the wild. Whilst we should continue to taint the kernel with CPU_OUT_OF_SPEC for features that differ in ways that we can't fix up, dumping a useless backtrace out of the cpufeature code is pointless and irritating. This patch removes the backtrace from the taint.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 18 May 2017, 1 commit
-
-
By Mark Rutland

Currently, cpus_set_cap() calls static_branch_enable_cpuslocked(), which must take the jump_label mutex. We call cpus_set_cap() in the secondary bringup path, from the idle thread where interrupts are disabled. Taking a mutex in this path "is a NONO" regardless of whether it's contended, and something we must avoid. We didn't spot this until recently, as ___might_sleep() won't warn for this case until all CPUs have been brought up.

This patch avoids taking the mutex in the secondary bringup path. The poking of static keys is deferred until enable_cpu_capabilities(), which runs in a suitable context on the boot CPU. To account for the static keys being set later, cpus_have_const_cap() is updated to use another static key to check whether the const cap keys have been initialised, falling back to the caps bitmap until this is the case.

This means that users of cpus_have_const_cap() should only gain a single additional NOP in the fast path once the const caps are initialised, but will always see the current cap value.

The hyp code should never dereference the caps array, since the caps are initialised before we run the module initcall to initialise hyp. A check is added to the hyp init code to document this requirement.

This change will sidestep a number of issues when the upcoming hotplug locking rework is merged.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Suzuki Poulose <suzuki.poulose@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Sewior <bigeasy@linutronix.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
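The fallback described above looks roughly like this sketch, with arm64_const_caps_ready acting as the gating static key:

    static inline bool cpus_have_const_cap(int num)
    {
            if (static_branch_likely(&arm64_const_caps_ready))
                    return __cpus_have_const_cap(num);  /* static-key fast path */
            else
                    return cpus_have_cap(num);          /* bitmap until caps are finalised */
    }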
-
- 07 April 2017, 1 commit
-
-
By Marc Zyngier

this_cpu_has_cap() only checks the feature array, and not the errata one. In order to be able to check for a CPU-local erratum, allow it to inspect the latter as well. This is consistent with cpus_have_cap()'s behaviour, which already includes errata.

Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 04 April 2017, 1 commit
-
-
By Dave Martin

read_system_reg() can readily be confused with read_sysreg(), whereas these are really quite different in their meaning. This patch attempts to reduce the ambiguity by reserving "sysreg" for the actual system register accessors.

read_system_reg() is instead renamed to read_sanitised_ftr_reg(), to make it more obvious that the Linux-defined sanitised feature register cache is being accessed here, not the underlying architectural system registers. cpufeature.c's internal __raw_read_system_reg() function is renamed in line with its actual purpose: a form of read_sysreg() that indexes on a (non-compile-time-constant) encoding rather than a symbolic register name.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
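After the rename, the two spellings read quite differently at a call site; a quick sketch of the intended distinction:

    u64 safe_isar0 = read_sanitised_ftr_reg(SYS_ID_AA64ISAR0_EL1); /* Linux's sanitised view */
    u64 this_isar0 = read_sysreg(id_aa64isar0_el1);                /* this CPU's raw value */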
-
- 21 March 2017, 4 commits
-
-
By Suzuki K Poulose

ARMv8.3 adds new instructions to support the Release Consistent processor consistent (RCpc) model, which is weaker than the RCsc model.

Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Suzuki K Poulose

ARMv8.3 adds support for new instructions to aid floating-point multiplication and addition of complex numbers. Expose the support via HWCAP and MRS emulation.

Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Suzuki K Poulose

ARMv8.3 adds support for a new instruction to perform conversion from double-precision floating point to integer, matching the architected behaviour of the equivalent Javascript conversion. Expose the availability via HWCAP and MRS emulation.

Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Will Deacon

As a recent change to ARMv8, ASID-tagged VIVT I-caches have been removed retrospectively from the architecture. Consequently, we don't need to support them in Linux either.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 15 February 2017, 1 commit
-
-
By Mark Rutland

In emulate_mrs() we may erroneously write back to the user SP rather than XZR if we trap an MRS instruction where Xt == 31. Use the new pt_regs_write_reg() helper to handle this correctly.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Fixes: 77c97b4e ("arm64: cpufeature: Expose CPUID registers by emulation")
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
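Roughly, the helper's behaviour is the following sketch: a write to register 31 is discarded rather than landing in pt_regs->sp, since x31 encodes XZR in this context:

    static inline void pt_regs_write_reg(struct pt_regs *regs, int r, u64 val)
    {
            if (r != 31)                    /* x31 is XZR here, not SP */
                    regs->regs[r] = val;
    }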
-
- 03 February 2017, 2 commits
-
-
By Mark Rutland

We recently discovered that __raw_read_system_reg() erroneously mapped some sysreg IDs to the wrong registers. To ensure that we don't get hit by a similar issue in future, this patch makes __raw_read_system_reg() use a macro for each case statement, ensuring that each case reads the correct register.

To confirm that this patch hasn't introduced an issue, I've binary-diffed the object files before and after this patch. No code or data sections differ (though some debug sections differ due to line numbering changes).

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
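The approach boils down to one macro per case, so the case label and the register actually read can no longer drift apart; a sketch (the list of ID registers shown is abbreviated):

    #define read_sysreg_case(r) \
            case r: return read_sysreg_s(r)

    static u64 __raw_read_system_reg(u32 sys_id)
    {
            switch (sys_id) {
            read_sysreg_case(SYS_ID_AA64ISAR0_EL1);
            read_sysreg_case(SYS_ID_AA64ISAR1_EL1);
            /* ... one line per supported ID register ... */
            default:
                    BUG();
                    return 0;
            }
    }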
-
By Mark Rutland

Since it was introduced in commit da8d02d1 ("arm64/capabilities: Make use of system wide safe value"), __raw_read_system_reg() has erroneously mapped some sysreg IDs to other registers. For the fields in ID_ISAR5_EL1, our local feature detection will be erroneous. We may spuriously detect that a feature is uniformly supported, or may fail to detect it when it actually is supported, meaning some compat hwcaps may be erroneous (or not enforced upon hotplug). This patch corrects the erroneous entries.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Fixes: da8d02d1 ("arm64/capabilities: Make use of system wide safe value")
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Will Deacon <will.deacon@arm.com>
-