- 15 January 2018, 6 commits
-
-
Submitted by Steve Capper
Currently one resolves the location of the reserved_ttbr0 for PAN by taking a positive offset from swapper_pg_dir. In a future patch we wish to extend the swapper such that its size is determined at link time rather than compile time, rendering SWAPPER_DIR_SIZE unsuitable for such a low-level calculation. This patch re-arranges the linker script so that reserved_ttbr0 is instead computed by subtracting RESERVED_TTBR0_SIZE from swapper_pg_dir.

Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
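A minimal C illustration of the new address relationship (the actual change is in the arm64 linker script and asm helpers; the page-sized RESERVED_TTBR0_SIZE is an assumption for the sketch):

```c
/* Assumption for illustration: the reserved page is one page. */
#define RESERVED_TTBR0_SIZE	PAGE_SIZE

extern char swapper_pg_dir[];	/* pgd_t[] in the kernel; char[] here for brevity */

/* reserved_ttbr0 now sits immediately *below* swapper_pg_dir, so it is
 * located by subtracting from a link-time symbol rather than by adding
 * the compile-time SWAPPER_DIR_SIZE offset.
 */
static inline unsigned long reserved_ttbr0_addr(void)
{
	return (unsigned long)swapper_pg_dir - RESERVED_TTBR0_SIZE;
}
```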
-
Submitted by James Morse
When CONFIG_UNMAP_KERNEL_AT_EL0 is set, the SDEI entry point and the rest of the kernel may be unmapped when we take an event. If this may be the case, use an entry trampoline that can switch to the kernel page tables.

We can't use the provided PSTATE to determine whether to switch page tables, as we may have interrupted the kernel's entry trampoline (or a normal-priority event that interrupted the kernel's entry trampoline). Instead, test for a user ASID in ttbr1_el1.

Save a value in regs->addr_limit to indicate whether we need to restore the original ASID when returning from this event. This value is only used by do_page_fault(), which we don't call with the SDEI regs.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
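A rough C rendering of the check (the real test lives in entry.S; USER_ASID_BIT here is an assumption about the KPTI ASID encoding):

```c
#include <asm/sysreg.h>

#define USER_ASID_BIT	48	/* assumption: LSB of the ASID field in TTBR1_EL1 */

/* Hedged sketch: with KPTI, the ASID installed for userspace has this
 * bit set, so seeing it in ttbr1_el1 means we interrupted user context
 * (or the entry trampoline) and must switch to the kernel page tables.
 */
static bool sdei_interrupted_user_asid(void)
{
	return read_sysreg(ttbr1_el1) & (1UL << USER_ASID_BIT);
}
```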
-
Submitted by James Morse
SDEI needs to calculate an offset in the trampoline page too. Move the extern char[] to sections.h. This patch just moves code around.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by James Morse
SDEI defines a new ACPI table to indicate the presence of the interface. The conduit is discovered in the same way as PSCI.

For ACPI we need to create the platform device ourselves, as SDEI doesn't have an entry in the DSDT. The SDEI platform device should be created after ACPI has been initialised so that we can parse the table, but before GHES devices are created, which may register SDE events if they use SDEI as their notification type.

Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
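A hedged sketch of the ordering described above; the initcall level and device name are assumptions about how the driver arranges this:

```c
#include <linux/acpi.h>
#include <linux/err.h>
#include <linux/platform_device.h>

/* On ACPI systems there is no DSDT entry for SDEI, so the platform
 * device is created by hand once the tables have been parsed.
 * subsys_initcall_sync runs after ACPI init but before GHES devices
 * are created, matching the ordering the commit describes.
 */
static int __init sdei_acpi_init(void)
{
	struct platform_device *pdev;

	if (acpi_disabled)
		return 0;

	pdev = platform_device_register_simple("sdei", 0, NULL, 0);
	return PTR_ERR_OR_ZERO(pdev);
}
subsys_initcall_sync(sdei_acpi_init);
```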
-
Submitted by James Morse
SDEI inherits the 'use hvc' bit that is also used by PSCI. PSCI does all its initialisation early; SDEI does its late. Remove the __init annotation from acpi_psci_use_hvc().

Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
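The change amounts to dropping the section annotation; a hedged before/after sketch (the function body shown is an assumption about how the FADT flag is read):

```c
#include <linux/acpi.h>

/* Before: bool __init acpi_psci_use_hvc(void) { ... }
 * The __init copy is discarded after boot, so SDEI's late
 * initialisation could not call it.
 */

/* After: kept resident so SDEI can query the conduit late. */
bool acpi_psci_use_hvc(void)
{
	return acpi_gbl_FADT.arm_boot_flags & ACPI_FADT_PSCI_USE_HVC;
}
```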
-
Submitted by James Morse
Private SDE events are per-cpu, and need to be registered and enabled on each CPU. Hide this detail from the caller by adapting our {,un}register and {en,dis}able calls to send an IPI to each CPU if the event is private.

CPU private events are unregistered when the CPU is powered-off, and re-registered when the CPU is brought back online. This saves bringing secondary cores back online to call private_reset() on shutdown, kexec and resume from hibernate.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
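A hedged sketch of the IPI fan-out for private events; the struct layout and firmware-call wrapper are illustrative:

```c
#include <linux/smp.h>
#include <linux/types.h>
#include <uapi/linux/arm_sdei.h>

struct sdei_event {			/* illustrative subset of the driver's struct */
	u32	event_num;
	u8	type;
};

static int sdei_api_event_enable(u32 event_num);	/* assumed firmware-call wrapper */

/* IPI callback: enable the per-CPU (private) event on this CPU. */
static void _ipi_event_enable(void *data)
{
	struct sdei_event *event = data;

	sdei_api_event_enable(event->event_num);
}

static int sdei_do_event_enable(struct sdei_event *event)
{
	if (event->type == SDEI_EVENT_TYPE_PRIVATE) {
		/* Private events must be enabled on every online CPU. */
		on_each_cpu(_ipi_event_enable, event, true);
		return 0;
	}

	/* Shared events need a single call on any CPU. */
	return sdei_api_event_enable(event->event_num);
}
```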
-
- 13 January 2018, 11 commits
-
-
Submitted by James Morse
When a CPU enters an idle lower-power state or is powering off, we need to mask SDE events so that no events can be delivered while we are messing with the MMU, as the registered entry points won't be valid.

If the system reboots, we want to unregister all events and mask the CPUs. For kexec this allows us to hand a clean slate to the next kernel instead of relying on it to call sdei_{private,system}_data_reset().

For hibernate we unregister all events and re-register them on restore, in case we restored with the SDE code loaded at a different address (e.g. KASLR).

Add all the notifiers necessary to do this. We only support shared events, so all events are left registered and enabled over CPU hotplug.

Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
[catalin.marinas@arm.com: added CPU_PM_ENTER_FAILED case]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
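A hedged sketch of the CPU PM hook: mask on the way into a low-power state, unmask on exit or on a failed entry (the case the maintainer note above mentions). sdei_mask_local_cpu() and sdei_unmask_local_cpu() are the interface's per-CPU mask calls; the notifier wiring is illustrative:

```c
#include <linux/cpu_pm.h>
#include <linux/notifier.h>

static int sdei_pm_notifier(struct notifier_block *nb, unsigned long action,
			    void *data)
{
	int rv;

	switch (action) {
	case CPU_PM_ENTER:
		rv = sdei_mask_local_cpu();
		break;
	case CPU_PM_ENTER_FAILED:
	case CPU_PM_EXIT:
		rv = sdei_unmask_local_cpu();
		break;
	default:
		return NOTIFY_DONE;
	}

	return rv ? notifier_from_errno(rv) : NOTIFY_OK;
}

static struct notifier_block sdei_pm_nb = {
	.notifier_call = sdei_pm_notifier,
};

/* Registered once at probe time with cpu_pm_register_notifier(&sdei_pm_nb). */
```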
-
Submitted by James Morse
The Software Delegated Exception Interface (SDEI) is an ARM standard for registering callbacks from the platform firmware into the OS. This is typically used to implement RAS notifications.

Such notifications enter the kernel at the registered entry-point with the register values of the interrupted CPU context. Because this is not a CPU exception, it cannot reuse the existing entry code (crucially, we don't implicitly know which exception level we interrupted). Add the entry point to entry.S to set us up for calling into C code.

If the event interrupted code that had interrupts masked, we always return to that location. Otherwise we pretend this was an IRQ, and use SDEI's complete_and_resume call to return to vbar_el1 + offset. This allows the kernel to deliver signals to user space processes. For KVM this triggers the world switch, a quick spin round vcpu_run, then back into the guest, unless there are pending signals.

Add sdei_mask_local_cpu() calls to the smp_send_stop() code; this covers the panic() code-path, which doesn't invoke cpuhotplug notifiers.

Because we can interrupt entry-from/exit-to another EL, we can't trust the value in sp_el0 or x29, even if we interrupted the kernel. In this case the code in entry.S will save/restore sp_el0 and use the value in __entry_task.

When we have VMAP stacks we can interrupt the stack-overflow test, which stirs x0 into sp, meaning we have to have our own VMAP stacks. For now these are allocated when we probe the interface. Future patches will add refcounting hooks to allow the arch code to allocate them lazily.

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
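A hedged C sketch of the return-path decision described above (the real handler in entry.S/sdei.c carries more state; this captures only the masked-interrupts test):

```c
#include <asm/ptrace.h>

/* If the interrupted context had interrupts masked we must COMPLETE
 * and return exactly where we were; otherwise we can behave like an
 * IRQ and COMPLETE_AND_RESUME at vbar_el1 + offset, giving the kernel
 * a chance to deliver pending signals.
 */
static bool sdei_should_complete_and_resume(const struct pt_regs *regs)
{
	return !(regs->pstate & PSR_I_BIT);
}
```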
-
Submitted by James Morse
Add __uaccess_{en,dis}able_hw_pan() helpers to set/clear the PSTATE.PAN bit.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
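The helpers end up as thin wrappers around the PSTATE.PAN set instruction, patched via the alternatives framework; a sketch consistent with the description:

```c
#include <asm/alternative.h>
#include <asm/sysreg.h>

/* On CPUs with the PAN extension the alternative patches in the
 * PSTATE.PAN write; elsewhere a NOP is left in place.
 */
static inline void __uaccess_disable_hw_pan(void)
{
	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,
			CONFIG_ARM64_PAN));
}

static inline void __uaccess_enable_hw_pan(void)
{
	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,
			CONFIG_ARM64_PAN));
}
```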
-
Submitted by James Morse
Today the arm64 arch code allocates an extra IRQ stack per-cpu. If we also have SDEI and VMAP stacks we need two extra per-cpu VMAP stacks.

Move the VMAP stack allocation out to a helper in a new header file. This avoids missing THREADINFO_GFP, or getting the all-important alignment wrong.

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
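A sketch of the helper matching the description (mainline keeps it in a small arm64 header); it reuses the same GFP flags and alignment the core kernel uses for task stacks:

```c
#include <linux/vmalloc.h>
#include <linux/thread_info.h>

static inline unsigned long *arch_alloc_vmap_stack(size_t stack_size, int node)
{
	/* Only meaningful when stacks live in the vmalloc area. */
	BUILD_BUG_ON(!IS_ENABLED(CONFIG_VMAP_STACK));

	return __vmalloc_node_range(stack_size, THREAD_ALIGN,
				    VMALLOC_START, VMALLOC_END,
				    THREADINFO_GFP, PAGE_KERNEL, 0, node,
				    __builtin_return_address(0));
}
```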
-
Submitted by James Morse
The Software Delegated Exception Interface (SDEI) is an ARM standard for registering callbacks from the platform firmware into the OS. This is typically used to implement firmware notifications (such as firmware-first RAS) or to handle an IRQ that has been promoted to a firmware-assisted NMI.

Add the code for detecting the SDEI version and the framework for registering and unregistering events. Subsequent patches will add the arch-specific backend code and the necessary power management hooks.

Only shared events are supported; power management, private events and discovery for ACPI systems will be added by later patches.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
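A hedged usage sketch of the registration API this framework exposes; the event number and handler body are illustrative:

```c
#include <linux/arm_sdei.h>

/* Called from firmware context: must be NMI-safe. */
static int my_event_cb(u32 event_num, struct pt_regs *regs, void *arg)
{
	return 0;
}

static int __init register_my_event(void)
{
	int err;

	err = sdei_event_register(42, my_event_cb, NULL);	/* 42: example event */
	if (err)
		return err;

	return sdei_event_enable(42);
}
```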
-
Submitted by James Morse
The Software Delegated Exception Interface (SDEI) is an ARM standard for registering callbacks from the platform firmware into the OS. This is typically used to implement RAS notifications, or to handle an IRQ that has been promoted to a firmware-assisted NMI.

Add a new devicetree binding to describe the SDE firmware interface.

Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
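A hedged sketch of how a driver might consume the binding: like PSCI, the node's "method" property selects the conduit. The enum values are local to the sketch:

```c
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/string.h>

enum sdei_conduit_types { CONDUIT_INVALID = 0, CONDUIT_SMC, CONDUIT_HVC };

static int sdei_get_conduit(struct platform_device *pdev)
{
	struct device_node *np = pdev->dev.of_node;
	const char *method;

	if (of_property_read_string(np, "method", &method))
		return -EINVAL;

	if (!strcmp(method, "hvc"))
		return CONDUIT_HVC;
	if (!strcmp(method, "smc"))
		return CONDUIT_SMC;

	return -EINVAL;
}
```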
-
Submitted by James Morse
Now that a VHE host uses tpidr_el2 for the cpu offset, we no longer need KVM to save/restore tpidr_el1. Move this from the 'common' code into the non-VHE code.

While we're at it, on VHE we don't need to save the ELR or SPSR, as kernel_entry in entry.S will have pushed these onto the kernel stack and will restore them from there. Move these to the non-VHE code, as we need them to get back to the host.

Finally, remove the always-copy-tpidr that we hid in the stage2 setup code; cpufeature's enable callback will do this for VHE, and we only need KVM to do it for non-VHE. Add the copy into kvm-init instead.

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by James Morse
Now that KVM uses tpidr_el2 in the same way as Linux's cpu_offset in tpidr_el1, merge the two. This saves KVM from save/restoring tpidr_el1 on VHE hosts, and allows future code to blindly access per-cpu variables without triggering world-switch.

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
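A sketch of the merged cpu-offset accessor consistent with the description: on VHE hosts an alternative rewrites the write to target tpidr_el2, so tpidr_el1 and tpidr_el2 hold the same per-cpu offset:

```c
#include <asm/alternative.h>

static inline void set_my_cpu_offset(unsigned long off)
{
	asm volatile(ALTERNATIVE("msr tpidr_el1, %0",
				 "msr tpidr_el2, %0",
				 ARM64_HAS_VIRT_HOST_EXTN)
			:: "r" (off) : "memory");
}
```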
-
Submitted by James Morse
Make tpidr_el2 a cpu-offset for per-cpu variables in the same way the host uses tpidr_el1. This lets tpidr_el{1,2} have the same value, and on VHE they can be the same register.

KVM calls hyp_panic() when anything unexpected happens. This may occur while a guest owns the EL1 registers. KVM stashes the vcpu pointer in tpidr_el2, which it uses to find the host context in order to restore the host EL1 registers before parachuting into the host's panic().

The host context is a struct kvm_cpu_context allocated in the per-cpu area, and mapped to hyp. Given the per-cpu offset for this CPU, this is easy to find. Change hyp_panic() to take a pointer to the struct kvm_cpu_context. Wrap these calls with an asm function that retrieves the struct kvm_cpu_context from the host's per-cpu area.

Copy the per-cpu offset from the host's tpidr_el1 into tpidr_el2 during kvm init. (Later patches will make this unnecessary for VHE hosts.)

We print out the vcpu pointer as part of the panic message. Add a back reference to the 'running vcpu' in the host cpu context to preserve this.

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by James Morse
kvm_host_cpu_state is a per-cpu allocation made from kvm_arch_init() used to store the host EL1 registers when KVM switches to a guest.

Make it easier for ASM to generate pointers into this per-cpu memory by making it a static allocation.

Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
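A hedged before/after sketch of the change: a static per-cpu definition gives asm a symbol whose address can be derived from the per-cpu offset alone:

```c
#include <linux/percpu.h>
#include <asm/kvm_host.h>

/* Before: dynamic; the pointer itself had to be loaded first. */
/* static kvm_cpu_context_t __percpu *kvm_host_cpu_state; */
/* ... kvm_host_cpu_state = alloc_percpu(kvm_cpu_context_t); */

/* After: static, so asm can address the symbol directly. */
static DEFINE_PER_CPU(kvm_cpu_context_t, kvm_host_cpu_state);
```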
-
Submitted by James Morse
KVM uses tpidr_el2 as its private vcpu register, which makes sense for non-VHE world switch as only KVM can access this register. This means VHE Linux has to use tpidr_el1, which KVM has to save/restore as part of the host context.

If the SDEI handler code runs behind KVM's back, it mustn't access any per-cpu variables. To allow this on systems with VHE we need to make the host use tpidr_el2, saving KVM from save/restoring it.

__guest_enter() stores the host_ctxt on the stack; do the same with the vcpu.

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 12 January 2018, 1 commit
-
-
Submitted by Catalin Marinas

Merge of 'for-next/perf' from git://git.kernel.org/pub/scm/linux/kernel/git/will/linux: support for the Cluster PMU part of the ARM DynamIQ Shared Unit (DSU).

* 'for-next/perf' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux:
  perf: ARM DynamIQ Shared Unit PMU support
  dt-bindings: Document devicetree binding for ARM DSU PMU
  arm_pmu: Use of_cpu_node_to_id helper
  arm64: Use of_cpu_node_to_id helper for CPU topology parsing
  irqchip: gic-v3: Use of_cpu_node_to_id helper
  coresight: of: Use of_cpu_node_to_id helper
  of: Add helper for mapping device node to logical CPU number
  perf: Export perf_event_update_userpage
-
- 09 January 2018, 13 commits
-
-
Submitted by Jayachandran C
Add the older Broadcom ID as well as the new Cavium ID for ThunderX2 CPUs.

Signed-off-by: Jayachandran C <jnair@caviumnetworks.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Shanker Donthineni
Falkor is susceptible to branch predictor aliasing and can theoretically be attacked by malicious code. This patch implements a mitigation for these attacks, preventing any malicious entries from affecting other victim contexts.

Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
[will: fix label name when !CONFIG_KVM and remove references to MIDR_FALKOR]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Will Deacon
Cortex-A57, A72, A73 and A75 are susceptible to branch predictor aliasing and can theoretically be attacked by malicious code. This patch implements a PSCI-based mitigation for these CPUs when available. The call into firmware will invalidate the branch predictor state, preventing any malicious entries from affecting other victim contexts.

Co-developed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Will Deacon
Hook up MIDR values for the Cortex-A72 and Cortex-A75 CPUs, since they will soon need MIDR matches for hardening the branch predictor.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
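A sketch of the hookup in asm/cputype.h style; the part numbers shown are the standard ARM values for these cores:

```c
#include <asm/cputype.h>

#define ARM_CPU_PART_CORTEX_A72		0xD08
#define ARM_CPU_PART_CORTEX_A75		0xD0A

#define MIDR_CORTEX_A72 \
	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A72)
#define MIDR_CORTEX_A75 \
	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A75)
```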
-
Submitted by Marc Zyngier
For those CPUs that require PSCI to perform a BP invalidation, going all the way to the PSCI code for not much is a waste of precious cycles. Let's terminate that call as early as possible.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Marc Zyngier
Now that we have per-CPU vectors, let's plug them into the KVM/arm64 code.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Will Deacon
Aliasing attacks against CPU branch predictors can allow an attacker to redirect speculative control flow on some CPUs and potentially divulge information from one context to another.

This patch adds initial skeleton code behind a new Kconfig option to enable implementation-specific mitigations against these attacks for CPUs that are affected.

Co-developed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
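A hedged sketch of the skeleton's shape (asm/mmu.h style): a per-cpu slot for an implementation-specific branch-predictor invalidation callback, invoked at points where one context could otherwise victimise another:

```c
#include <linux/percpu.h>

typedef void (*bp_hardening_cb_t)(void);

struct bp_hardening_data {
	int			hyp_vectors_slot;
	bp_hardening_cb_t	fn;
};

DECLARE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);

static inline void arm64_apply_bp_hardening(void)
{
	struct bp_hardening_data *d = this_cpu_ptr(&bp_hardening_data);

	/* No-op unless a CPU-specific mitigation has been installed. */
	if (d->fn)
		d->fn();
}
```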
-
Submitted by Marc Zyngier
We will soon need to invoke a CPU-specific function pointer after changing page tables, so move post_ttbr_update_workaround out into C code to make this possible.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
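Moved to C, the workaround becomes an ordinary function that later patches can extend with a function-pointer call; a sketch matching the description:

```c
#include <linux/linkage.h>
#include <asm/alternative.h>

/* The Cavium erratum 27456 I-cache workaround, now a C function
 * called after the TTBR update instead of inline asm in the entry path.
 */
asmlinkage void post_ttbr_update_workaround(void)
{
	asm(ALTERNATIVE("nop; nop; nop",
			"ic iallu; dsb nsh; isb",
			ARM64_WORKAROUND_CAVIUM_27456,
			CONFIG_CAVIUM_ERRATUM_27456));
}
```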
-
Submitted by Will Deacon
Entry into recent versions of ARM Trusted Firmware will invalidate the CPU branch predictor state in order to protect against aliasing attacks.

This patch exposes the PSCI "VERSION" function via psci_ops, so that it can be invoked outside of the PSCI driver where necessary.

Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
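A hedged usage sketch: once exposed through psci_ops, callers outside the PSCI driver can invoke PSCI_VERSION, which on affected firmware also invalidates the branch predictor on entry to EL3. The callback name is illustrative:

```c
#include <linux/psci.h>

static void psci_bp_hardening_cb(void)
{
	/* The call's side effect (BP invalidation in firmware) is what
	 * we want; the returned version number is discarded.
	 */
	if (psci_ops.get_version)
		psci_ops.get_version();
}
```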
-
Submitted by Will Deacon
In order to invoke the CPU capability ->matches callback from the ->enable callback for applying local-CPU workarounds, we need a handle on the capability structure.

This patch passes a pointer to the capability structure to the ->enable callback.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Will Deacon
For non-KASLR kernels where the KPTI behaviour has not been overridden on the command line, we can use ID_AA64PFR0_EL1.CSV3 to determine whether or not we should unmap the kernel whilst running at EL0.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
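A hedged sketch of the decision in cpufeature.c style: CSV3 reports that the CPU is not vulnerable, so KPTI can stay off unless KASLR wants it for robustness (the real code also honours the command-line override, elided here):

```c
#include <asm/cpufeature.h>
#include <asm/sysreg.h>

static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
				int __unused)
{
	u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);

	/* Useful for KASLR robustness regardless of CSV3. */
	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
		return true;

	/* CSV3 == 1 means data cannot be speculatively loaded across
	 * the user/kernel boundary, so KPTI is unnecessary.
	 */
	return !cpuid_feature_extract_unsigned_field(pfr0,
						     ID_AA64PFR0_CSV3_SHIFT);
}
```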
-
Submitted by Will Deacon
Although CONFIG_UNMAP_KERNEL_AT_EL0 does make KASLR more robust, it's actually more useful as a mitigation against speculation attacks that can leak arbitrary kernel data to userspace through speculation.

Reword the Kconfig help message to reflect this, and make the option depend on EXPERT so that it is on by default for the majority of users.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Will Deacon
Speculation attacks against the entry trampoline can potentially resteer the speculative instruction stream through the indirect branch and into arbitrary gadgets within the kernel.

This patch defends against these attacks by forcing a misprediction through the return stack: a dummy BL instruction loads an entry into the stack, so that the predicted program flow of the subsequent RET instruction is to a branch-to-self instruction which is finally resolved as a branch to the kernel vectors with speculation suppressed.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 05 January 2018, 2 commits
-
-
Submitted by Dongjiu Geng
ARM v8.4 extensions add new NEON instructions for performing a multiplication of each FP16 element of one vector with the corresponding FP16 element of a second vector, and for adding or subtracting this, without an intermediate rounding, to the corresponding FP32 element in a third vector.

This patch detects this feature and lets userspace know about it via a HWCAP bit and MRS emulation.

Cc: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
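A hedged userspace sketch: once the kernel advertises the feature, a program can test the HWCAP bit before using the FP16 multiply instructions (the fallback define assumes the bit value this series allocates):

```c
#include <stdio.h>
#include <sys/auxv.h>

#ifndef HWCAP_ASIMDFHM
#define HWCAP_ASIMDFHM	(1UL << 23)	/* assumption: bit added by this series */
#endif

int main(void)
{
	if (getauxval(AT_HWCAP) & HWCAP_ASIMDFHM)
		printf("FP16 FMLAL/FMLSL supported\n");
	else
		printf("FP16 FMLAL/FMLSL not supported\n");
	return 0;
}
```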
-
Submitted by Catalin Marinas
Under some uncommon timing conditions, a generation check and xchg(active_asids, A1) in check_and_switch_context() on P1 can race with an ASID roll-over on P2. If P2 has not seen the update to active_asids[P1], it can re-allocate A1 to a new task T2 on P2. P1 ends up waiting on the spinlock since the xchg() returned 0 while P2 can go through a second ASID roll-over with (T2,A1,G2) active on P2. This roll-over copies active_asids[P1] == A1,G1 into reserved_asids[P1] and active_asids[P2] == A1,G2 into reserved_asids[P2]. A subsequent scheduling of T1 on P1 and T2 on P2 would match reserved_asids and get their generation bumped to G3:

	P1					P2
	--					--
	TTBR0.BADDR = T0
	TTBR0.ASID = A0
	asid_generation = G1
	check_and_switch_context(T1,A1,G1)
	  generation match
						check_and_switch_context(T2,A0,G0)
						  new_context()
						    ASID roll-over
						    asid_generation = G2
						    flush_context()
						      active_asids[P1] = 0
						      asid_map[A1] = 0
						      reserved_asids[P1] = A0,G0
	  xchg(active_asids, A1)
	    active_asids[P1] = A1,G1
	    xchg returns 0
	  spin_lock_irqsave()
						    allocated ASID (T2,A1,G2)
						    asid_map[A1] = 1
						  active_asids[P2] = A1,G2
						...
						check_and_switch_context(T3,A0,G0)
						  new_context()
						    ASID roll-over
						    asid_generation = G3
						    flush_context()
						      active_asids[P1] = 0
						      asid_map[A1] = 1
						      reserved_asids[P1] = A1,G1
						      reserved_asids[P2] = A1,G2
						    allocated ASID (T3,A2,G3)
						    asid_map[A2] = 1
						  active_asids[P2] = A2,G3
	new_context()
	  check_update_reserved_asid(A1,G1)
	    matches reserved_asid[P1]
	    reserved_asid[P1] = A1,G3
	  updated T1 ASID to (T1,A1,G3)
						check_and_switch_context(T2,A1,G2)
						  new_context()
						    check_and_switch_context(A1,G2)
						      matches reserved_asids[P2]
						      reserved_asids[P2] = A1,G3
						  updated T2 ASID to (T2,A1,G3)

At this point, we have two tasks, T1 and T2, both using ASID A1 with the latest generation G3. Any of them is allowed to be scheduled on the other CPU, leading to two different tasks with the same ASID on the same CPU.

This patch changes the xchg to cmpxchg so that active_asids is only updated if non-zero, to avoid a race with an ASID roll-over on a different CPU.

The ASID allocation algorithm has been formally verified using the TLA+ model checker (see https://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/kernel-tla.git/tree/asidalloc.tla for the spec).

Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
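An excerpt-style hedged sketch of the fixed fast path, modelled on check_and_switch_context() (asid and cpu are in scope there; the generation re-check present in the real code is elided):

```c
/* Only install the new ASID if active_asids is still non-zero: a
 * concurrent roll-over on another CPU zeroes it, and in that case we
 * must fall through to the slow path under cpu_asid_lock instead of
 * silently resurrecting a flushed ASID.
 */
old_active_asid = atomic64_read(&per_cpu(active_asids, cpu));
if (old_active_asid &&
    atomic64_cmpxchg_relaxed(&per_cpu(active_asids, cpu),
			     old_active_asid, asid))
	goto switch_mm_fastpath;

/* slow path: take cpu_asid_lock, possibly roll over, then install */
```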
-
- 03 January 2018, 7 commits
-
-
Submitted by Suzuki K Poulose
Add support for the Cluster PMU part of the ARM DynamIQ Shared Unit (DSU). The DSU integrates one or more cores with an L3 memory system, control logic, and external interfaces to form a multicore cluster. The PMU allows counting the various events related to the L3, SCU, etc., along with providing a cycle counter.

The PMU can be accessed via system registers, which are common to the cores in the same cluster. The PMU registers follow the semantics of the ARMv8 PMU, mostly, with the exception that the counters record the cluster-wide events.

This driver is mostly based on the ARMv8 and CCI PMU drivers. The driver only supports ARM64 at the moment. It can be extended to support ARM32 by providing register accessors like we do in arch/arm64/include/asm/arm_dsu_pmu.h.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Suzuki K Poulose
This patch documents the devicetree bindings for the ARM DSU PMU.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: devicetree@vger.kernel.org
Cc: frowand.list@gmail.com
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Suzuki K Poulose
Use the new generic helper, of_cpu_node_to_id(), to map a phandle to the logical CPU number while parsing the PMU irq affinity.

Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Suzuki K Poulose
Make use of the new generic helper to convert an of_node of a CPU to the logical CPU id when parsing the topology.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Leo Yan <leo.yan@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Suzuki K Poulose
Use the new generic helper of_cpu_node_to_id() instead of using our own version to map a device node to logical CPU number.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Suzuki K Poulose
Reuse the new generic helper, of_cpu_node_to_id(), to map a given CPU phandle to a logical CPU number.

Acked-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Tested-by: Leo Yan <leo.yan@linaro.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Suzuki K Poulose
Add a helper to map a device node to a logical CPU number to avoid duplication. Currently this is open coded in different places (e.g. gic-v3, coresight).

The helper tries to map a device node to a "possible" logical CPU id, which may not be online yet. It is the responsibility of the user to make sure that the CPU is online. The helper uses of_cpu_device_node_get() to retrieve the device node for a given CPU (which uses per_cpu data if available, else falls back to the slower of_get_cpu_node()).

Cc: devicetree@vger.kernel.org
Cc: Frank Rowand <frowand.list@gmail.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
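A sketch of the helper consistent with the description: walk the possible CPUs and compare each CPU's device node against the one we were given:

```c
#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/of.h>
#include <linux/of_device.h>

int of_cpu_node_to_id(struct device_node *cpu_node)
{
	int cpu;
	bool found = false;
	struct device_node *np;

	for_each_possible_cpu(cpu) {
		np = of_cpu_device_node_get(cpu);
		found = (cpu_node == np);
		of_node_put(np);
		if (found)
			return cpu;
	}

	return -ENODEV;
}
```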
-