- 13 January 2018, 4 commits
-
-
By James Morse

Now that KVM uses tpidr_el2 in the same way as Linux's cpu_offset in tpidr_el1, merge the two. This saves KVM from save/restoring tpidr_el1 on VHE hosts, and allows future code to blindly access per-cpu variables without triggering world-switch.

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
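For reference, a minimal sketch of the merged accessor (assuming the existing ARM64_HAS_VIRT_HOST_EXTN capability and the ALTERNATIVE patching macro used elsewhere in arch/arm64): the per-cpu offset is read from tpidr_el2 on VHE and tpidr_el1 otherwise, so the two registers can carry the same value.

    /* Sketch: per-cpu offset read that works on both VHE and non-VHE.
     * On VHE the host runs at EL2, so tpidr_el2 holds the offset;
     * otherwise tpidr_el1 does. */
    static inline unsigned long __my_cpu_offset(void)
    {
            unsigned long off;

            asm(ALTERNATIVE("mrs %0, tpidr_el1",
                            "mrs %0, tpidr_el2",
                            ARM64_HAS_VIRT_HOST_EXTN)
                : "=r" (off));
            return off;
    }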
-
By James Morse

Make tpidr_el2 a cpu-offset for per-cpu variables in the same way the host uses tpidr_el1. This lets tpidr_el{1,2} have the same value, and on VHE they can be the same register.

KVM calls hyp_panic() when anything unexpected happens. This may occur while a guest owns the EL1 registers. KVM stashes the vcpu pointer in tpidr_el2, which it uses to find the host context in order to restore the host EL1 registers before parachuting into the host's panic().

The host context is a struct kvm_cpu_context allocated in the per-cpu area, and mapped to hyp. Given the per-cpu offset for this CPU, this is easy to find. Change hyp_panic() to take a pointer to the struct kvm_cpu_context. Wrap these calls with an asm function that retrieves the struct kvm_cpu_context from the host's per-cpu area.

Copy the per-cpu offset from the host's tpidr_el1 into tpidr_el2 during KVM init. (Later patches will make this unnecessary for VHE hosts.)

We print out the vcpu pointer as part of the panic message. Add a back reference to the 'running vcpu' in the host cpu context to preserve this.

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By James Morse

kvm_host_cpu_state is a per-cpu allocation made from kvm_arch_init(), used to store the host EL1 registers when KVM switches to a guest. Make it easier for asm to generate pointers into this per-cpu memory by making it a static allocation.

Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
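In essence the change is the following (a sketch, assuming the existing kvm_cpu_context_t typedef):

    /* Before: allocated dynamically in kvm_arch_init() */
    static kvm_cpu_context_t __percpu *kvm_host_cpu_state;
    ...
    kvm_host_cpu_state = alloc_percpu(kvm_cpu_context_t);

    /* After: a static per-cpu variable, so asm code can form a pointer
     * to it from its symbol plus the per-cpu offset. */
    static DEFINE_PER_CPU(kvm_cpu_context_t, kvm_host_cpu_state);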
-
By James Morse

KVM uses tpidr_el2 as its private vcpu register, which makes sense for non-VHE world switch as only KVM can access this register. This means VHE Linux has to use tpidr_el1, which KVM has to save/restore as part of the host context.

If the SDEI handler code runs behind KVM's back, it mustn't access any per-cpu variables. To allow this on systems with VHE we need to make the host use tpidr_el2, saving KVM from save/restoring it.

__guest_enter() stores the host_ctxt on the stack; do the same with the vcpu.

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 12 January 2018, 1 commit
-
-
By Catalin Marinas

Merge branch 'for-next/perf' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux

Support for the Cluster PMU part of the ARM DynamIQ Shared Unit (DSU).

* 'for-next/perf' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux:
  perf: ARM DynamIQ Shared Unit PMU support
  dt-bindings: Document devicetree binding for ARM DSU PMU
  arm_pmu: Use of_cpu_node_to_id helper
  arm64: Use of_cpu_node_to_id helper for CPU topology parsing
  irqchip: gic-v3: Use of_cpu_node_to_id helper
  coresight: of: Use of_cpu_node_to_id helper
  of: Add helper for mapping device node to logical CPU number
  perf: Export perf_event_update_userpage
-
- 09 January 2018, 13 commits
-
-
By Jayachandran C

Add the older Broadcom ID as well as the new Cavium ID for ThunderX2 CPUs.

Signed-off-by: Jayachandran C <jnair@caviumnetworks.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Shanker Donthineni

Falkor is susceptible to branch predictor aliasing and can theoretically be attacked by malicious code. This patch implements a mitigation for these attacks, preventing any malicious entries from affecting other victim contexts.

Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
[will: fix label name when !CONFIG_KVM and remove references to MIDR_FALKOR]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Will Deacon

Cortex-A57, A72, A73 and A75 are susceptible to branch predictor aliasing and can theoretically be attacked by malicious code. This patch implements a PSCI-based mitigation for these CPUs when available. The call into firmware will invalidate the branch predictor state, preventing any malicious entries from affecting other victim contexts.

Co-developed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
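Roughly, the enable path installs the PSCI VERSION call as the hardening callback (a sketch based on this description; install_bp_hardening_cb and the __psci_hyp_bp_inval_* hyp stubs are part of this series):

    static int enable_psci_bp_hardening(void *data)
    {
            const struct arm64_cpu_capabilities *entry = data;

            /* Any trap into patched firmware invalidates the branch
             * predictor; PSCI VERSION is the cheapest such call. */
            if (psci_ops.get_version)
                    install_bp_hardening_cb(entry,
                            (bp_hardening_cb_t)psci_ops.get_version,
                            __psci_hyp_bp_inval_start,
                            __psci_hyp_bp_inval_end);
            return 0;
    }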
-
By Will Deacon

Hook up MIDR values for the Cortex-A72 and Cortex-A75 CPUs, since they will soon need MIDR matches for hardening the branch predictor.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Marc Zyngier

For those CPUs that require PSCI to perform a BP invalidation, going all the way to the PSCI code for not much is a waste of precious cycles. Let's terminate that call as early as possible.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Marc Zyngier

Now that we have per-CPU vectors, let's plug them into the KVM/arm64 code.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Will Deacon

Aliasing attacks against CPU branch predictors can allow an attacker to redirect speculative control flow on some CPUs and potentially divulge information from one context to another.

This patch adds initial skeleton code behind a new Kconfig option to enable implementation-specific mitigations against these attacks for CPUs that are affected.

Co-developed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Marc Zyngier

We will soon need to invoke a CPU-specific function pointer after changing page tables, so move post_ttbr_update_workaround out into C code to make this possible.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Will Deacon

Entry into recent versions of ARM Trusted Firmware will invalidate the CPU branch predictor state in order to protect against aliasing attacks.

This patch exposes the PSCI "VERSION" function via psci_ops, so that it can be invoked outside of the PSCI driver where necessary.

Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
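Conceptually (a sketch), struct psci_operations gains a get_version hook that callers outside the driver can invoke:

    /* Sketch of the new hook in struct psci_operations */
    struct psci_operations {
            u32 (*get_version)(void);
            /* ... existing ops ... */
    };

    /* Caller side: entering firmware via PSCI_VERSION has the side
     * effect of invalidating the branch predictor on patched ATF. */
    if (psci_ops.get_version)
            (void)psci_ops.get_version();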
-
By Will Deacon

In order to invoke the CPU capability ->matches callback from the ->enable callback for applying local-CPU workarounds, we need a handle on the capability structure. This patch passes a pointer to the capability structure to the ->enable callback.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
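The resulting shape (a sketch; the callback keeps its void * argument but now receives the capability itself, so the exact signatures here are illustrative):

    /* An ->enable callback can now re-run ->matches on the local CPU */
    static int cpu_enable_workaround(void *data)
    {
            const struct arm64_cpu_capabilities *cap = data;

            if (cap->matches(cap, SCOPE_LOCAL_CPU))
                    ; /* apply the local-CPU workaround here */
            return 0;
    }

    /* Caller side: the capability structure is passed through */
    on_each_cpu(caps->enable, (void *)caps, true);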
-
By Will Deacon

For non-KASLR kernels where the KPTI behaviour has not been overridden on the command line, we can use ID_AA64PFR0_EL1.CSV3 to determine whether or not we should unmap the kernel whilst running at EL0.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Will Deacon

Although CONFIG_UNMAP_KERNEL_AT_EL0 does make KASLR more robust, it's actually more useful as a mitigation against speculation attacks that can leak arbitrary kernel data to userspace through speculation.

Reword the Kconfig help message to reflect this, and make the option depend on EXPERT so that it is on by default for the majority of users.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Will Deacon

Speculation attacks against the entry trampoline can potentially resteer the speculative instruction stream through the indirect branch and into arbitrary gadgets within the kernel.

This patch defends against these attacks by forcing a misprediction through the return stack: a dummy BL instruction loads an entry into the stack, so that the predicted program flow of the subsequent RET instruction is to a branch-to-self instruction which is finally resolved as a branch to the kernel vectors with speculation suppressed.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 05 January 2018, 2 commits
-
-
By Dongjiu Geng

The ARM v8.4 extensions add new NEON instructions for performing a multiplication of each FP16 element of one vector with the corresponding FP16 element of a second vector, and adding or subtracting this (without an intermediate rounding) to the corresponding FP32 element in a third vector.

This patch detects this feature and lets userspace know about it via a HWCAP bit and MRS emulation.

Cc: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
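From userspace the new capability shows up in the auxiliary vector; a quick check (a sketch, assuming the HWCAP_ASIMDFHM bit this patch introduces and its uapi value):

    #include <stdio.h>
    #include <sys/auxv.h>

    #ifndef HWCAP_ASIMDFHM
    #define HWCAP_ASIMDFHM  (1 << 23)   /* from the arm64 uapi hwcap header */
    #endif

    int main(void)
    {
            if (getauxval(AT_HWCAP) & HWCAP_ASIMDFHM)
                    printf("FP16 multiply-accumulate to FP32 supported\n");
            return 0;
    }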
-
By Catalin Marinas

Under some uncommon timing conditions, a generation check and xchg(active_asids, A1) in check_and_switch_context() on P1 can race with an ASID roll-over on P2. If P2 has not seen the update to active_asids[P1], it can re-allocate A1 to a new task T2 on P2. P1 ends up waiting on the spinlock since the xchg() returned 0 while P2 can go through a second ASID roll-over with (T2,A1,G2) active on P2. This roll-over copies active_asids[P1] == A1,G1 into reserved_asids[P1] and active_asids[P2] == A1,G2 into reserved_asids[P2]. A subsequent scheduling of T1 on P1 and T2 on P2 would match reserved_asids and get their generation bumped to G3:

    P1                                  P2
    --                                  --
    TTBR0.BADDR = T0
    TTBR0.ASID = A0
    asid_generation = G1
    check_and_switch_context(T1,A1,G1)
      generation match
                                        check_and_switch_context(T2,A0,G0)
                                          new_context()
                                            ASID roll-over
                                            asid_generation = G2
                                            flush_context()
                                              active_asids[P1] = 0
                                              asid_map[A1] = 0
                                              reserved_asids[P1] = A0,G0
      xchg(active_asids, A1)
        active_asids[P1] = A1,G1
        xchg returns 0
      spin_lock_irqsave()
                                            allocated ASID (T2,A1,G2)
                                            asid_map[A1] = 1
                                          active_asids[P2] = A1,G2
                                        ...
                                        check_and_switch_context(T3,A0,G0)
                                          new_context()
                                            ASID roll-over
                                            asid_generation = G3
                                            flush_context()
                                              active_asids[P1] = 0
                                              asid_map[A1] = 1
                                              reserved_asids[P1] = A1,G1
                                              reserved_asids[P2] = A1,G2
                                            allocated ASID (T3,A2,G3)
                                            asid_map[A2] = 1
                                          active_asids[P2] = A2,G3
      new_context()
        check_update_reserved_asid(A1,G1)
          matches reserved_asid[P1]
          reserved_asid[P1] = A1,G3
      updated T1 ASID to (T1,A1,G3)
                                        check_and_switch_context(T2,A1,G2)
                                          new_context()
                                            check_update_reserved_asid(A1,G2)
                                              matches reserved_asids[P2]
                                              reserved_asids[P2] = A1,G3
                                          updated T2 ASID to (T2,A1,G3)

At this point, we have two tasks, T1 and T2, both using ASID A1 with the latest generation G3. Either of them is allowed to be scheduled on the other CPU, leading to two different tasks with the same ASID on the same CPU.

This patch changes the xchg to cmpxchg so that active_asids is only updated if non-zero, to avoid a race with an ASID roll-over on a different CPU.

The ASID allocation algorithm has been formally verified using the TLA+ model checker (see https://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/kernel-tla.git/tree/asidalloc.tla for the spec).

Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
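The essence of the fix (a sketch following the description above; active_asids is a per-cpu atomic64):

    /* Fast path: only claim the active_asids slot if it is still
     * non-zero, i.e. no roll-over has cleared it behind our back.
     * Seeing 0 (or a stale generation) forces the slow path below. */
    old_active_asid = atomic64_read(&per_cpu(active_asids, cpu));
    if (old_active_asid &&
        !((asid ^ atomic64_read(&asid_generation)) >> asid_bits) &&
        atomic64_cmpxchg_relaxed(&per_cpu(active_asids, cpu),
                                 old_active_asid, asid))
            goto switch_mm_fastpath;

    raw_spin_lock_irqsave(&cpu_asid_lock, flags);
    /* slow path: re-check the generation, allocate a new ASID, ... */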
-
- 03 January 2018, 8 commits
-
-
By Suzuki K Poulose

Add support for the Cluster PMU part of the ARM DynamIQ Shared Unit (DSU). The DSU integrates one or more cores with an L3 memory system, control logic, and external interfaces to form a multicore cluster. The PMU allows counting various events related to the L3, SCU, etc., along with providing a cycle counter.

The PMU can be accessed via system registers, which are common to the cores in the same cluster. The PMU registers mostly follow the semantics of the ARMv8 PMU, with the exception that the counters record cluster-wide events.

This driver is mostly based on the ARMv8 and CCI PMU drivers. It only supports ARM64 at the moment; it can be extended to support ARM32 by providing register accessors like we do in arch/arm64/include/asm/arm_dsu_pmu.h.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Suzuki K Poulose

This patch documents the devicetree bindings for the ARM DSU PMU.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: devicetree@vger.kernel.org
Cc: frowand.list@gmail.com
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Suzuki K Poulose

Use the new generic helper, of_cpu_node_to_id(), to map a phandle to the logical CPU number while parsing the PMU irq affinity.

Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Suzuki K Poulose

Make use of the new generic helper to convert the of_node of a CPU to the logical CPU id when parsing the topology.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Leo Yan <leo.yan@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Suzuki K Poulose

Use the new generic helper of_cpu_node_to_id() instead of our own version to map a device node to a logical CPU number.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Suzuki K Poulose

Reuse the new generic helper, of_cpu_node_to_id(), to map a given CPU phandle to a logical CPU number.

Acked-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Tested-by: Leo Yan <leo.yan@linaro.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Suzuki K Poulose

Add a helper to map a device node to a logical CPU number, to avoid duplication. Currently this is open coded in different places (e.g. gic-v3, coresight). The helper tries to map the device node to a "possible" logical CPU id, which may not be online yet. It is the responsibility of the user to make sure that the CPU is online. The helper uses of_cpu_device_node_get() to retrieve the device node for a given CPU (which uses per_cpu data if available, else falls back to the slower of_get_cpu_node()).

Cc: devicetree@vger.kernel.org
Cc: Frank Rowand <frowand.list@gmail.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
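Typical usage (a sketch; the surrounding function is hypothetical, only of_cpu_node_to_id() comes from this patch): resolve a CPU phandle and map it to a logical CPU number.

    #include <linux/of.h>

    static int get_cpu_for_node(struct device_node *node, int i)
    {
            struct device_node *cpu_node;
            int cpu;

            cpu_node = of_parse_phandle(node, "cpus", i);
            if (!cpu_node)
                    return -ENODEV;

            /* Returns the logical id of a *possible* CPU, or -ENODEV;
             * the caller must still check whether the CPU is online. */
            cpu = of_cpu_node_to_id(cpu_node);
            of_node_put(cpu_node);
            return cpu;
    }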
-
By Suzuki K Poulose

Export perf_event_update_userpage() so that PMU drivers using it can be built as modules.

Acked-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
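The change itself is the expected one-liner next to the function in kernel/events/core.c:

    EXPORT_SYMBOL_GPL(perf_event_update_userpage);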
-
- 02 January 2018, 3 commits
-
-
By Jason A. Donenfeld

This is entirely cosmetic, but somehow it was missed when sending differing versions of this patch. This just makes the file a bit more uniform.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Prashanth Prakash

CPU_PM_CPU_IDLE_ENTER_RETENTION skips calling cpu_pm_enter() and cpu_pm_exit(). By not calling the cpu_pm functions in the idle entry/exit paths we can reduce the latency involved in entering and exiting low-power idle states.

On an ARM64-based Qualcomm server platform we measured the following overhead for calling cpu_pm_enter and cpu_pm_exit for retention states.

    workload: stress --hdd #CPUs --hdd-bytes 32M -t 30
    Average overhead of cpu_pm_enter - 1.2us
    Average overhead of cpu_pm_exit  - 3.1us

Acked-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Prashanth Prakash <pprakash@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Prashanth Prakash

If a CPU is entering a low-power idle state where it doesn't lose any context, then there is no need to call cpu_pm_enter()/cpu_pm_exit(). Add a new macro (CPU_PM_CPU_IDLE_ENTER_RETENTION) to be used by cpuidle drivers when they are entering a retention state. By not calling cpu_pm_enter and cpu_pm_exit we reduce the latency involved in entering and exiting retention idle states.

CPU_PM_CPU_IDLE_ENTER_RETENTION assumes that no state is lost and hence CPU PM notifiers will not be called. We may need a broader change if we need to support partial retention states efficiently.

On an ARM64-based Qualcomm server platform we measured the following overhead for calling cpu_pm_enter and cpu_pm_exit for retention states.

    workload: stress --hdd #CPUs --hdd-bytes 32M -t 30
    Average overhead of cpu_pm_enter - 1.2us
    Average overhead of cpu_pm_exit  - 3.1us

Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Prashanth Prakash <pprakash@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
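A cpuidle driver would use the macro like this (a sketch modelled on the generic ARM cpuidle driver; the function name is illustrative, arm_cpuidle_suspend is the platform's low-level entry):

    static int arm_enter_retention_idle_state(struct cpuidle_device *dev,
                                              struct cpuidle_driver *drv,
                                              int idx)
    {
            /* No context is lost in this state, so skip the CPU PM
             * notifier round-trip on entry and exit. */
            return CPU_PM_CPU_IDLE_ENTER_RETENTION(arm_cpuidle_suspend, idx);
    }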
-
- 23 December 2017, 9 commits
-
-
By Catalin Marinas

* for-next/52-bit-pa:
  arm64: enable 52-bit physical address support
  arm64: allow ID map to be extended to 52 bits
  arm64: handle 52-bit physical addresses in page table entries
  arm64: don't open code page table entry creation
  arm64: head.S: handle 52-bit PAs in PTEs in early page table setup
  arm64: handle 52-bit addresses in TTBR
  arm64: limit PA size to supported range
  arm64: add kconfig symbol to configure physical address size
-
By Kristina Martsenko

Now that 52-bit physical address support is in place, add the kconfig symbol to enable it. As described in ARMv8.2, the larger addresses are only supported with the 64k granule. Also ensure that PAN is configured (or TTBR0 PAN is not), as explained in an earlier patch in this series.

Tested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Bob Picco <bob.picco@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Kristina Martsenko

Currently, when using VA_BITS < 48, if the ID map text happens to be placed in physical memory above VA_BITS, we increase the VA size (up to 48) and create a new table level, in order to map in the ID map text. This is okay because the system always supports 48 bits of VA.

This patch extends the code such that if the system supports 52 bits of VA, and the ID map text is placed that high up, then we increase the VA size accordingly, up to 52.

One difference from the current implementation is that so far the condition of VA_BITS < 48 has meant that the top level table is always "full", with the maximum number of entries, and an extra table level is always needed. Now, when VA_BITS = 48 (and using 64k pages), the top level table is not full, and we simply need to increase the number of entries in it, instead of creating a new table level.

Tested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Bob Picco <bob.picco@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[catalin.marinas@arm.com: reduce arguments to __create_hyp_mappings()]
[catalin.marinas@arm.com: reworked/renamed __cpu_uses_extended_idmap_level()]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Kristina Martsenko

The top 4 bits of a 52-bit physical address are positioned at bits 12..15 of a page table entry. Introduce macros to convert between a physical address and its placement in a table entry, and change all macros/functions that access PTEs to use them.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Tested-by: Bob Picco <bob.picco@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[catalin.marinas@arm.com: some long lines wrapped]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
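Since PA bits 48..51 land in PTE bits 12..15, the conversion is a shift by 36 in each direction; the macros take roughly this shape (a sketch for the 52-bit PA configuration, assuming the PTE_ADDR_LOW/PTE_ADDR_MASK companions):

    #define PTE_ADDR_HIGH   (_AT(pteval_t, 0xf) << 12)      /* PA[51:48] */

    /* PA -> PTE: fold the high bits down into bits 15:12 */
    #define __phys_to_pte_val(phys) \
            (((phys) | ((phys) >> 36)) & PTE_ADDR_MASK)

    /* PTE -> PA: lift bits 15:12 back up to 51:48 */
    #define __pte_to_phys(pte) \
            ((pte_val(pte) & PTE_ADDR_LOW) | \
             ((pte_val(pte) & PTE_ADDR_HIGH) << 36))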
-
By Kristina Martsenko

Instead of open coding the generation of page table entries, use the macros/functions that exist for this - pfn_p*d and p*d_populate. Most code in the kernel already uses these macros; this patch tries to fix up the few places that don't. This is useful for the next patch in this series, which needs to change the page table entry logic, and it's better to have that logic in one place.

The KVM extended ID map is special, since we're creating a level above CONFIG_PGTABLE_LEVELS and the required function isn't available. Leave it as is and add a comment to explain it. (The normal kernel ID map code doesn't need this change because its page tables are created in assembly (__create_page_tables).)

Tested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Bob Picco <bob.picco@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Kristina Martsenko

The top 4 bits of a 52-bit physical address are positioned at bits 12..15 in page table entries. Introduce a macro to move the bits there, and change the early ID map and swapper table setup code to use it.

Tested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Bob Picco <bob.picco@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[catalin.marinas@arm.com: additional comments for clarification]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Kristina Martsenko

The top 4 bits of a 52-bit physical address are positioned at bits 2..5 in the TTBR registers. Introduce a couple of macros to move the bits there, and change all TTBR writers to use them.

Leave the TTBR0 PAN code unchanged, to avoid complicating it. A system with 52-bit PA will have PAN anyway (because it's ARMv8.1 or later), and a system without 52-bit PA can only use up to 48-bit PAs. A later patch in this series will add a kconfig dependency to ensure PAN is configured.

In addition, when using 52-bit PA there is a special alignment requirement on the top-level table. We don't currently have any VA_BITS configuration that would violate the requirement, but one could be added in the future, so add a compile-time BUG_ON to check for it.

Tested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Bob Picco <bob.picco@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[catalin.marinas@arm.com: added TTBR_BADD_MASK_52 comment]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
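Here the shift is 46, since PA bits 48..51 go to TTBR bits 2..5 (a sketch of the C side; an equivalent assembly macro covers asm callers):

    /* TTBR_ELx.BADDR: bits [47:x] hold PA[47:x], bits [5:2] hold PA[51:48] */
    #define TTBR_BADDR_MASK_52      (((UL(1) << 46) - 1) << 2)

    #define phys_to_ttbr(addr) \
            (((addr) | ((addr) >> 46)) & TTBR_BADDR_MASK_52)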
-
By Kristina Martsenko

We currently copy the physical address size from ID_AA64MMFR0_EL1.PARange directly into TCR.(I)PS. This will not work for 4k and 16k granule kernels on systems that support 52-bit physical addresses, since 52-bit addresses are only permitted with the 64k granule.

To fix this, fall back to 48 bits when configuring the PA size when the kernel does not support 52-bit PAs. When it does, fall back to 52, to avoid similar problems in the future if the PA size is ever increased above 52.

Tested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Bob Picco <bob.picco@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[catalin.marinas@arm.com: tcr_set_pa_size macro renamed to tcr_compute_pa_size]
[catalin.marinas@arm.com: comments added to tcr_compute_pa_size]
[catalin.marinas@arm.com: definitions added for TCR_*PS_SHIFT]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Kristina Martsenko

ARMv8.2 introduces support for 52-bit physical addresses. To prepare for supporting this, add a new kconfig symbol to configure the physical address space size. The symbol will be used in subsequent patches. Currently the only choice is 48; a later patch will add the option of 52 once the required code is in place.

Tested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Tested-by: Bob Picco <bob.picco@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[catalin.marinas@arm.com: folded minor patches into this one]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-