- 11 May 2016, 1 commit
-
-
Committed by Suzuki K Poulose
Remove the unnecessary smp_wmb(), which was added to make sure that update_cpu_boot_status() completes before we mark the CPU online. update_cpu_boot_status() already has a dsb() (required for the failing CPUs) to ensure the correct behavior.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Dennis Chen <dennis.chen@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 25 April 2016, 1 commit
-
-
Committed by Suzuki K Poulose
maxcpus=n sets the number of CPUs activated at boot time to a maximum of n, while allowing the remaining CPUs to be brought up later if the user decides to do so. However, on arm64, for various reasons, we disallowed hotplugging CPUs beyond n by marking them not present. Now that we have checks in place to make sure the hotplugged CPUs have features compatible with the system and require no new errata, relax the restriction.
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 19 April 2016, 1 commit
-
-
Committed by Jan Glauber
When CPUs are stopped during an abnormal operation such as panic, a line is printed for each CPU and its stack trace is dumped. This information is only interesting for the aborting CPU, and on systems with many CPUs it only makes debugging harder when the log is flooded with data about every other CPU as well. Therefore remove the stack dump and printk for the other CPUs and only print a single line saying that the other CPUs are going to be stopped and, in case any CPUs remain online, list them.
Signed-off-by: Jan Glauber <jglauber@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
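A minimal sketch of what the resulting stop path could look like (helper names such as smp_cross_call() and IPI_CPU_STOP follow arm64's smp.c; the exact messages and timeout are illustrative assumptions, not the patch itself):

```c
/* Sketch: stop secondary CPUs with one summary line instead of per-CPU dumps. */
void smp_send_stop(void)
{
	unsigned long timeout;

	if (num_online_cpus() > 1) {
		cpumask_t mask;

		cpumask_copy(&mask, cpu_online_mask);
		cpumask_clear_cpu(smp_processor_id(), &mask);

		/* single line for the whole system, no per-CPU stack dumps */
		pr_crit("SMP: stopping secondary CPUs\n");
		smp_cross_call(&mask, IPI_CPU_STOP);
	}

	/* Wait up to one second for the other CPUs to stop. */
	timeout = USEC_PER_SEC;
	while (num_online_cpus() > 1 && timeout--)
		udelay(1);

	/* If anything is still online, list it so the panic log shows it. */
	if (num_online_cpus() > 1)
		pr_warn("SMP: failed to stop secondary CPUs %*pbl\n",
			cpumask_pr_args(cpu_online_mask));
}
```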
-
- 16 April 2016, 2 commits
-
-
Committed by Ganapatrao Kulkarni
Attempt to get the memory and CPU NUMA node via of_numa. If that fails, default to the dummy NUMA node and map all memory and CPUs to node 0.
Tested-by: Shannon Zhao <shannon.zhao@linaro.org>
Reviewed-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com>
Signed-off-by: David Daney <david.daney@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Suzuki K Poulose
With a VHE-capable CPU the kernel can run at EL2, and this is decided at early boot. If some of the CPUs did not start at EL2 or do not have VHE, we could end up with CPUs running at different exception levels, all in the same kernel! This patch adds an early check for the secondary CPUs to detect such situations. For each non-boot CPU, add a sanity check to make sure we don't have a different run level w.r.t. the boot CPU. We save the information on whether the boot CPU is running in hyp mode or not and ensure the remaining CPUs match it.
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
[will: made boot_cpu_hyp_mode static]
Signed-off-by: Will Deacon <will.deacon@arm.com>
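A rough sketch of the described check (function names and the failure action are assumptions for illustration; the real patch records the boot CPU's run level and verifies secondaries against it):

```c
/* Sketch: remember the boot CPU's exception level and verify secondaries. */
static bool boot_cpu_hyp_mode;

void __init save_boot_cpu_run_el(void)
{
	boot_cpu_hyp_mode = is_hyp_mode_available();
}

static bool is_boot_cpu_in_hyp_mode(void)
{
	return boot_cpu_hyp_mode;
}

/* Called early on each secondary CPU, e.g. from its boot path. */
static void check_cpu_run_el(void)
{
	if (is_hyp_mode_available() != is_boot_cpu_in_hyp_mode()) {
		pr_crit("CPU%d: mismatched exception level with boot CPU\n",
			smp_processor_id());
		/* refuse to bring this CPU online (illustrative) */
		cpu_die_early();
	}
}
```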
-
- 02 March 2016, 1 commit
-
-
Committed by Thomas Gleixner
Let the non-boot CPUs call into idle with the corresponding hotplug state, so the hotplug core can handle the further bringup. That's a first step towards converting the boot side of the hotplugged CPUs to do all the synchronization with the other side through the state machine. For now it'll only start the hotplug thread and kick the full bringup of the CPU.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
Cc: Rik van Riel <riel@redhat.com>
Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
Cc: "Srivatsa S. Bhat" <srivatsa@mit.edu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Sebastian Siewior <bigeasy@linutronix.de>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul Turner <pjt@google.com>
Link: http://lkml.kernel.org/r/20160226182341.614102639@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 25 February 2016, 2 commits
-
-
Committed by Suzuki K Poulose
A secondary CPU could fail to come online due to insufficient capabilities and could simply die or loop in the kernel. For example, a CPU with no support for the selected kernel PAGE_SIZE loops in the kernel with the MMU turned off, or a hotplugged CPU which doesn't have one of the advertised system capabilities will die during activation. There is no way to synchronise the status of the failing CPU back to the master. This patch solves the issue by adding a field to secondary_data which can be updated by the failing CPU. If the secondary CPU fails even before turning the MMU on, it updates the status in a special variable reserved in the head.text section, to make sure that the update can be cache-invalidated safely without possible sharing of the cache writeback granule.
Here are the possible states:
-1. CPU_MMU_OFF - Initial value set by the master CPU; this value indicates that the CPU could not turn the MMU on, hence the status could not be reliably updated in secondary_data. Instead, the CPU has updated the status at __early_cpu_boot_status.
0. CPU_BOOT_SUCCESS - CPU has booted successfully.
1. CPU_KILL_ME - CPU has invoked cpu_ops->die, indicating that the master CPU should synchronise by issuing a cpu_ops->cpu_kill.
2. CPU_STUCK_IN_KERNEL - CPU couldn't invoke die(); instead it is looping in the kernel. This information could be used, for example, by kexec to check if it is really safe to do a kexec reboot.
3. CPU_PANIC_KERNEL - CPU detected some serious issue which requires the kernel to crash immediately. The secondary CPU cannot call panic() until it has initialised the GIC. This flag can be used to instruct the master to do so.
Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
[catalin.marinas@arm.com: conflict resolution]
[catalin.marinas@arm.com: converted "status" from int to long]
[catalin.marinas@arm.com: updated update_early_cpu_boot_status to use str_l]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
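A sketch of the status values listed above and of how the master CPU might react to them (the values mirror the list in the message; the handler is illustrative, the real handling lives in __cpu_up()):

```c
/* Sketch: the status codes a failing secondary can report back. */
#define CPU_MMU_OFF		(-1)	/* status only valid in __early_cpu_boot_status */
#define CPU_BOOT_SUCCESS	(0)
#define CPU_KILL_ME		(1)
#define CPU_STUCK_IN_KERNEL	(2)
#define CPU_PANIC_KERNEL	(3)

/* Illustrative handling by the master CPU once the boot timeout expires. */
static void handle_boot_status(unsigned int cpu, long status)
{
	switch (status) {
	case CPU_KILL_ME:
		/* the dying CPU asked to be reaped via cpu_ops->cpu_kill */
		if (cpu_ops[cpu]->cpu_kill)
			cpu_ops[cpu]->cpu_kill(cpu);
		break;
	case CPU_STUCK_IN_KERNEL:
		pr_crit("CPU%u: is stuck in kernel\n", cpu);
		break;
	case CPU_PANIC_KERNEL:
		panic("CPU%u detected unsupported configuration\n", cpu);
	}
}
```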
-
Committed by Suzuki K Poulose
This patch moves cpu_die_early to smp.c, where it fits better. No functional changes, except for adding the necessary checks for CONFIG_HOTPLUG_CPU.
Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 16 February 2016, 2 commits
-
-
Committed by Lorenzo Pieralisi
The SBBR and ACPI specifications allow ACPI-based systems that do not implement PSCI (e.g. systems with no EL3) to boot through the ACPI parking protocol specification [1]. This patch implements the ACPI parking protocol CPU operations, and adds code that eases parsing the parking protocol data structures to the ARM64 SMP initialisation carried out at the same time as cpus enumeration. To wake up the CPUs from the parked state, this patch implements a wakeup IPI for ARM64 (i.e. arch_send_wakeup_ipi_mask()) that mirrors the ARM one, so that a specific IPI is sent for wake-up purposes in order to distinguish it from other IPI sources. Given the current ACPI MADT parsing API, the patch implements a glue layer that helps passing the MADT GICC data structure from the SMP initialisation code to the parking protocol implementation, somewhat overriding the CPU operations interfaces. This is to avoid creating a completely transparent DT/ACPI CPU operations layer that would require creating opaque structure handling for CPU data (DT represents a CPU through DT nodes, ACPI through static MADT table entries), which seems overkill given that ACPI on ARM64 mandates only two booting protocols (PSCI and parking protocol), so there is no need for further protocol additions. Based on the original work by Mark Salter <msalter@redhat.com>
[1] https://acpica.org/sites/acpica/files/MP%20Startup%20for%20ARM%20platforms.docx
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Loc Ho <lho@apm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Al Stone <ahs3@redhat.com>
[catalin.marinas@arm.com: Added WARN_ONCE(!acpi_parking_protocol_valid()) on the IPI]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Mark Rutland
We currently open-code the removal of the idmap and the restoration of the current task's MMU state in a few places. Before introducing yet more copies of this sequence, unify these to call a new helper, cpu_uninstall_idmap.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
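A sketch of the helper being introduced, assuming the TTBR0/TCR helpers that arm64 already provides (the real definition lives in asm/mmu_context.h; treat this as illustrative rather than the exact patch):

```c
/* Sketch: drop the idmap and reinstall the current task's page tables. */
static inline void cpu_uninstall_idmap(void)
{
	struct mm_struct *mm = current->active_mm;

	cpu_set_reserved_ttbr0();	/* point TTBR0 at the reserved tables */
	local_flush_tlb_all();		/* local flush only, no broadcast */
	cpu_set_default_tcr_t0sz();	/* restore the default TTBR0 VA size */

	if (mm != &init_mm)
		cpu_switch_mm(mm->pgd, mm);	/* reinstall the task's tables */
}
```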
-
- 12 November 2015, 1 commit
-
-
Committed by Jisheng Zhang
of_parse_and_init_cpus is only called from within smp.c, so it can be declared static.
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 21 October 2015, 4 commits
-
-
Committed by Suzuki K. Poulose
At the moment we run through the arm64_features capability list for each CPU and set a capability if one of the CPUs supports it. This could be problematic in a heterogeneous system with differing capabilities. Delay the CPU feature checks until all the enabled CPUs are up (i.e. smp_cpus_done()), so that we can make better decisions based on the overall system capability. Once we decide and advertise the capabilities, the alternatives can be applied. From this state we cannot roll back a feature to disabled based on the values from a newly hotplugged CPU, due to the runtime patching and other reasons. So, for all new CPUs, we need to make sure that they have the established system capabilities. Failing that, we bring the CPU down, preventing it from turning online. Once the capabilities are decided, any new CPU booting up goes through verification to ensure that it has all the enabled capabilities and also invokes the respective enable() method on the CPU.
The CPU errata checks are not delayed and are still executed per CPU to detect the respective capabilities. If we ever come across a non-errata capability that needs to be checked on each CPU, we could introduce it via a new capability table (or introduce a flag), which can be processed per CPU. The next patch will make the feature checks use the system-wide safe value of a feature register.
NOTE: The enable() methods associated with a capability are scheduled on all the CPUs (which is the only use case at the moment). If we need a different type of enable() which only needs to be run once on any CPU, we should be able to handle that when needed.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
[catalin.marinas@arm.com: static variable and coding style fixes]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
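A rough sketch of the per-CPU verification described above, under the assumption that the capability table keeps its matches()/enable() hooks (the function name, the exact hook signatures, and the failure action are assumptions; the real checks live in cpufeature.c):

```c
/* Sketch: verify a hotplugged CPU against the finalised system capabilities. */
static void verify_local_cpu_capabilities(void)
{
	const struct arm64_cpu_capabilities *caps = arm64_features;
	int i;

	for (i = 0; caps[i].matches; i++) {
		if (!cpus_have_cap(caps[i].capability))
			continue;		/* not enabled system-wide, ignore */

		if (!caps[i].matches(&caps[i])) {
			pr_crit("CPU%d: missing capability: %s\n",
				smp_processor_id(), caps[i].desc);
			cpu_die_early();	/* refuse to bring it online */
		}

		if (caps[i].enable)
			caps[i].enable(NULL);	/* per-CPU enable hook (signature assumed) */
	}
}
```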
-
Committed by Suzuki K. Poulose
At the moment the boot CPU stores its cpuinfo long before the PERCPU areas are initialised by the kernel. This could be problematic, as the non-boot CPU data structures might get copied with the data from the boot CPU, giving us no chance to detect whether a particular CPU updated its cpuinfo. This patch delays the boot CPU store to smp_prepare_boot_cpu(). It also kills setup_processor(), which no longer does meaningful work.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K. Poulose
Delay the ELF HWCAP initialisation until all the (enabled) CPUs are up, i.e. smp_cpus_done(). This is in preparation for detecting the common features across the CPUs and creating a consistent ELF HWCAP for the system.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K. Poulose
At early boot, we print the CPU version/revision. On a heterogeneous system, we could have different types of CPUs, so print the CPU info for all active cpus. Also, the secondary CPUs print the message only when they turn online. Finally, remove the redundant 'revision' information, which doesn't make any sense without the 'variant' field.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 10 October 2015, 1 commit
-
-
Committed by Yang Yingliang
When a cpu is disabled, all of its irqs are migrated to another cpu. In some cases the new affinity is different and the old affinity needs to be updated, but if irq_set_affinity's return value is IRQ_SET_MASK_OK_DONE, the old affinity can not be updated. Fix it by using irq_do_set_affinity. And since migrating interrupts is a core code matter, use the generic function irq_migrate_all_off_this_cpu() in kernel/irq/migration.c to migrate the interrupts.
Cc: Jiang Liu <jiang.liu@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 07 October 2015, 2 commits
-
-
Committed by Will Deacon
mm_cpumask isn't actually used for anything on arm64, so remove all the code trying to keep it up to date.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Will Deacon
There are a number of places where a single CPU is running with a private page table and we need to perform maintenance on the TLB and I-cache in order to ensure correctness, but we do not require the operation to be broadcast to other CPUs. This patch adds local variants of tlb_flush_all and __flush_icache_all to support these use cases and updates the callers respectively. __local_flush_icache_all also implies an isb, since it is intended to be used synchronously.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Daney <david.daney@cavium.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
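A sketch of the local variants described above, using the non-shareable (nsh) barrier domain so the maintenance stays on the issuing CPU (the real definitions live in asm/tlbflush.h and asm/cacheflush.h; shown here only to illustrate the idea):

```c
/* Sketch: local (non-broadcast) TLB and I-cache maintenance. */
static inline void local_flush_tlb_all(void)
{
	dsb(nshst);
	asm("tlbi vmalle1");	/* local TLB invalidate, not the *is variant */
	dsb(nsh);
	isb();
}

static inline void __local_flush_icache_all(void)
{
	asm("ic iallu");	/* invalidate this CPU's I-cache only */
	dsb(nsh);
	isb();			/* implied isb: callers use this synchronously */
}
```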
-
- 30 July 2015, 1 commit
-
-
Committed by Jonas Rabenstein
Commit 4b3dc967 ("arm64: force CONFIG_SMP=y and remove redundant #ifdefs") made SMP mandatory, and therefore UP_LATE_INIT can not be selected anymore. Remove the dead #ifdef block depending on UP_LATE_INIT in arch/arm64/kernel/setup.c.
Signed-off-by: Jonas Rabenstein <jonas.rabenstein@studium.uni-erlangen.de>
[will: kill do_post_cpus_up_work altogether]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 07 July 2015, 1 commit
-
-
Committed by Al Stone
For those parts of the arm64 ACPI code that need to check GICC subtables in the MADT, use the new BAD_MADT_GICC_ENTRY macro instead of the previous BAD_MADT_ENTRY. The new macro takes into account differences in the size of the GICC subtable that the old macro did not; this caused failures even though the subtable entries are valid.
Fixes: aeb823bb ("ACPICA: ACPI 6.0: Add changes for FADT table.")
Signed-off-by: Al Stone <al.stone@linaro.org>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 03 July 2015, 1 commit
-
-
Committed by Hanjun Guo
It is normal for firmware to present GICC entries (processors) with the disabled flag set in the ACPI MADT. Taking a system of 16 cpus as an example, the ACPI firmware may present 8 cpus as enabled and another 8 as disabled in the MADT; the disabled cpus can be hot-added later. Firmware may also present more cpus than the hardware actually has, but disable the unused ones, so it can easily enable them when the hardware gains such cpus, which keeps the firmware code scalable. So disabled cpus in the MADT are not an error; we can switch pr_err() to pr_debug() to make the boot a little quieter by default.
Since the hwid for disabled cpus is often invalid, and we check for an invalid hwid first in the code, cpus intended for later hot-add would be filtered out and not counted as possible cpus. So move this check before the hwid one, to prepare the code to count disabled cpus when cpu hot-plug is introduced.
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Reviewed-by: Al Stone <ahs3@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 25 June 2015, 1 commit
-
-
Committed by Stephen Boyd
John Stultz reported an RCU splat on ARM with IPI trace events enabled. It looks like the same problem exists on ARM64. At this point in the IPI handling path we haven't called irq_enter() yet, so RCU doesn't know that we're about to exit idle and properly warns that we're using RCU from an idle CPU. Use trace_ipi_entry_rcuidle() instead of trace_ipi_entry() so that RCU is informed about our exit from idle.
Cc: John Stultz <john.stultz@linaro.org>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: <stable@vger.kernel.org> # 3.17+
Fixes: 45ed695a ("ARM64: add IPI tracepoints")
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
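A sketch of where the _rcuidle tracepoints sit in the IPI handler (the dispatch body is elided; the surrounding structure follows arm64's handle_IPI(), but treat the details as illustrative):

```c
/* Sketch: trace IPIs with the _rcuidle variants, before irq_enter(). */
void handle_IPI(int ipinr, struct pt_regs *regs)
{
	unsigned int cpu = smp_processor_id();
	struct pt_regs *old_regs = set_irq_regs(regs);

	if ((unsigned)ipinr < NR_IPI) {
		trace_ipi_entry_rcuidle(ipi_types[ipinr]);	/* was trace_ipi_entry() */
		__inc_irq_stat(cpu, ipi_irqs[ipinr]);
	}

	/* ... switch (ipinr): reschedule, call-function, stop, ...
	 *     each case does its own irq_enter()/irq_exit() as needed ... */

	if ((unsigned)ipinr < NR_IPI)
		trace_ipi_exit_rcuidle(ipi_types[ipinr]);

	set_irq_regs(old_regs);
}
```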
-
- 27 May 2015, 1 commit
-
-
Committed by Mark Rutland
cpu_kill currently returns one for success and zero for failure, which is unlike all the other cpu_operations, which return zero for success and an error code upon failure. This difference is unnecessarily confusing. Make cpu_kill consistent with the other cpu_operations.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
-
- 21 May 2015, 1 commit
-
-
Committed by Paul E. McKenney
This commit replaces the open-coded CPU-offline notification with the new common code. In particular, this change avoids calling scheduler code using RCU from an offline CPU that RCU is ignoring. This is a minimal change. A more intrusive change might invoke the cpu_check_up_prepare() and cpu_set_state_online() functions at CPU-online time, which would allow onlining to throw an error if the CPU did not go offline properly.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: linux-arm-kernel@lists.infradead.org
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 19 May 2015, 2 commits
-
-
Committed by Lorenzo Pieralisi
The code that initializes cpus on arm64 is currently split into two different code paths that carry out DT and ACPI cpu initialization. Most of the code executing SMP initialization is common and should be merged to reduce discrepancies between ACPI and DT initialization and to have the code initializing cpus in a single common place in the kernel. This patch refactors the arm64 SMP cpu initialization code to merge the ACPI and DT boot paths in a common file and to create sanity checks that can be reused by both boot methods. The current code assumes PSCI is the only available boot method when arm64 boots with ACPI; this can be easily extended if/when the ACPI parking protocol is merged into the kernel.
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Hanjun Guo <hanjun.guo@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Mark Rutland <mark.rutland@arm.com> [DT]
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Lorenzo Pieralisi
ARM64 CPU operations such as cpu_init and cpu_init_idle take a struct device_node pointer as a parameter, which corresponds to the device tree node of the logical cpu on which the operation has to be applied. With the advent of ACPI on arm64, where MADT static table entries are used to initialize cpus, the device tree node parameter in the cpu_ops hooks becomes useless when booting with ACPI, since in that case cpu device tree nodes are not present and can not be used for cpu initialization. The current cpu_init hook requires a struct device_node pointer parameter because it is called while parsing the device tree to initialize CPUs, when the cpu_logical_map (that is used to match a cpu node reg property to a device tree node) for a given logical cpu id is not set up yet. This means that the cpu_init hook cannot rely on the of_get_cpu_node function to retrieve the device tree node corresponding to the logical cpu id passed in as a parameter, so the cpu device tree node must be passed in as a parameter to fix this catch-22 dependency cycle. This patch reshuffles the cpu_logical_map initialization code so that the cpu_init cpu_ops hook can safely use the of_get_cpu_node function to retrieve the cpu device tree node, removing the need for the device tree node pointer parameter. In the process, the patch removes the device tree node parameters from all cpu_ops hooks, in preparation for SMP DT/ACPI cpu initialization consolidation.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Hanjun Guo <hanjun.guo@linaro.org>
Acked-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Mark Rutland <mark.rutland@arm.com> [DT]
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 25 March 2015, 1 commit
-
-
Committed by Hanjun Guo
The MADT contains the MPIDR information which is essential for SMP initialization, so parse the GIC cpu interface structures to get the MPIDR values, map them to cpu_logical_map(), and add each enabled cpu with a valid MPIDR to cpu_possible_map. ACPI 5.1 only has two explicit methods to boot up SMP, PSCI and the Parking protocol, but the Parking protocol is only specified for ARMv7 for now, so make PSCI the only SMP boot protocol until the ACPI spec or the Parking protocol spec is updated. Parking protocol patches for SMP boot will be sent upstream when the new version of the Parking protocol is ready.
CC: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
CC: Mark Rutland <mark.rutland@arm.com>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Yijing Wang <wangyijing@huawei.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Robert Richter <rrichter@cavium.com>
Acked-by: Olof Johansson <olof@lixom.net>
Reviewed-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 23 March 2015, 1 commit
-
-
Committed by Ard Biesheuvel
The page size and the number of translation levels, and hence the supported virtual address range, are build-time configurables on arm64 whose optimal values are use-case dependent. However, in the current implementation, if the system's RAM is located at a very high offset, the virtual address range needs to reflect that merely because the identity mapping, which is only used to enable or disable the MMU, requires the extended virtual range to map the physical memory at an equal virtual offset. This patch relaxes that requirement by increasing the number of translation levels for the identity mapping only, and only when actually needed, i.e. when system RAM's offset is found to be out of reach at runtime.
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 18 March 2015, 1 commit
-
-
Committed by Mark Rutland
Currently we only perform alternative patching for kernels built with CONFIG_SMP, as we call apply_alternatives_all() in smp.c, which is only built for CONFIG_SMP. Thus !SMP kernels may not have the necessary alternatives patched in. This patch ensures that we call apply_alternatives_all() once all CPUs are booted, even for !SMP kernels, by having the smp_init_cpus() stub call this for !SMP kernels via up_late_init. A new wrapper, do_post_cpus_up_work, is added so we can hook other calls here later (e.g. boot mode logging).
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Fixes: e039ee4e ("arm64: add alternative runtime patching")
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 05 March 2015, 1 commit
-
-
Committed by Rusty Russell
Thanks to spatch.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
-
- 24 January 2015, 1 commit
-
-
Committed by Jiang Liu
Commit 9a46ad6d ("smp: make smp_call_function_many() use logic similar to smp_call_function_single()") has unified the way single and multiple cross-CPU function calls are handled. Now only one interrupt is needed for architecture-specific code to support the generic SMP function call interfaces, so kill the redundant single-function-call interrupt.
Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 04 December 2014, 1 commit
-
-
Committed by Andre Przywara
Currently the kernel patches all necessary instructions once at boot time, so modules are not covered by this. Change the apply_alternatives() function to take a beginning and an end pointer and introduce a new variant (apply_alternatives_all()) to cover the existing use case for the static kernel image section. Add a module_finalize() function to arm64 to check for an alternatives section in a module and patch only the instructions from that specific area. Since the module code is not touched before the module initialisation has ended, we don't need to halt the machine before doing the patching in the module's code.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
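A sketch of the module hook described above: walk the module's section headers, find its .altinstructions section, and patch only that range (illustrative; the real hook lives in arch/arm64/kernel/module.c):

```c
/* Sketch: patch a module's own alternatives at load time. */
int module_finalize(const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs,
		    struct module *me)
{
	const Elf_Shdr *s, *se;
	const char *secstrs = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;

	for (s = sechdrs, se = sechdrs + hdr->e_shnum; s < se; s++) {
		/* only the module's .altinstructions section is patched */
		if (strcmp(".altinstructions", secstrs + s->sh_name) == 0) {
			apply_alternatives((void *)s->sh_addr, s->sh_size);
			return 0;
		}
	}
	return 0;
}
```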
-
- 25 November 2014, 1 commit
-
-
Committed by Andre Przywara
With a blatant copy of some x86 bits we introduce the alternative runtime patching "framework" to arm64. This is quite basic for now and we only provide the functions we need at this time. This is connected to the newly introduced feature bits.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 14 September 2014, 1 commit
-
-
Committed by Frederic Weisbecker
ARM64 irq work self-IPI support depends on __smp_cross_call pointing to some relevant IRQ controller operations. This information should be available after the call to init_IRQ(). Let's implement arch_irq_work_has_interrupt() accordingly.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
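A minimal sketch of the check: irq work can only raise its self-IPI once the interrupt controller has registered the cross-call hook, so the predicate simply tests that pointer (the real one-liner lives in asm/irq_work.h):

```c
/* Sketch: IRQ work has an interrupt once the IPI hook is wired up. */
static inline bool arch_irq_work_has_interrupt(void)
{
	return !!__smp_cross_call;
}
```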
-
- 08 August 2014, 1 commit
-
-
Committed by Nicolas Pitre
The strings used to list IPIs in /proc/interrupts are reused for tracing purposes. While at it, the code is slightly cleaned up so the ipi_types array indices are no longer offset by IPI_RESCHEDULE, whose value is 0 anyway.
Link: http://lkml.kernel.org/p/1406318733-26754-5-git-send-email-nicolas.pitre@linaro.org
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
- 18 July 2014, 1 commit
-
-
Committed by Mark Rutland
Several kernel subsystems need to know details about CPU system register values, sometimes for CPUs other than the one they are executing on. Rather than hard-coding system register accesses and cross-calls for these cases, this patch adds logic to record various system register values at boot time. This may be used for feature reporting, firmware bug detection, etc. Separate hooks are added for the boot and hotplug paths to enable one-time initialisation and cold/warm boot value mismatch detection in later patches.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
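A trimmed-down sketch of the per-CPU record and the boot/hotplug hooks (the real struct cpuinfo_arm64 covers many more ID registers; only a few illustrative fields are shown):

```c
/* Sketch: record a few system register values per CPU at boot. */
struct cpuinfo_arm64 {
	u32 reg_midr;
	u64 reg_id_aa64mmfr0;
	u64 reg_id_aa64pfr0;
	/* ... many more ID registers in the real structure ... */
};

DEFINE_PER_CPU(struct cpuinfo_arm64, cpu_data);

static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
{
	info->reg_midr		= read_cpuid_id();
	info->reg_id_aa64mmfr0	= read_cpuid(ID_AA64MMFR0_EL1);
	info->reg_id_aa64pfr0	= read_cpuid(ID_AA64PFR0_EL1);
}

/* Boot path: one-time initialisation on the boot CPU. */
void __init cpuinfo_store_boot_cpu(void)
{
	__cpuinfo_store_cpu(this_cpu_ptr(&cpu_data));
}

/* Hotplug path: later patches can diff this against the boot CPU's values. */
void cpuinfo_store_cpu(void)
{
	__cpuinfo_store_cpu(this_cpu_ptr(&cpu_data));
}
```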
-
- 17 May 2014, 1 commit
-
-
Committed by Larry Bassel
Support for arch_irq_work_raise() was missing from arm64 (a prerequisite for FULL_NOHZ). This patch is based on the arm32 patch ARM 7872/1:
commit bf18525f
Author: Stephen Boyd <sboyd@codeaurora.org>
Date: Tue Oct 29 20:32:56 2013 +0100
ARM: 7872/1: Support arch_irq_work_raise() via self IPIs
By default, IRQ work is run from the tick interrupt (see irq_work_run() in update_process_times()). When we're in full NOHZ mode, restarting the tick requires the use of IRQ work, and if the only place we run IRQ work is in the tick interrupt we have an unbreakable cycle. Implement arch_irq_work_raise() via self IPIs to break this cycle and get the tick started again. Note that we implement this via IPIs, which are only available on SMP builds. This shouldn't be a problem because full NOHZ is only supported on SMP builds anyway.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Reviewed-by: Kevin Hilman <khilman@linaro.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Larry Bassel <larry.bassel@linaro.org>
Reviewed-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
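A sketch of the arm64 side of this: raise IRQ work by sending a dedicated self-IPI, once the cross-call hook is available (names follow arm64's smp.c; treat the block as illustrative):

```c
/* Sketch: raise IRQ work via a self-IPI so the tick can be restarted. */
#ifdef CONFIG_IRQ_WORK
void arch_irq_work_raise(void)
{
	/* only possible once the irqchip has registered the IPI hook */
	if (__smp_cross_call)
		smp_cross_call(cpumask_of(smp_processor_id()), IPI_IRQ_WORK);
}
#endif
```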
-
- 15 May 2014, 1 commit
-
-
Committed by Ashwin Chaugule
PSCIv0.2 adds a new function called AFFINITY_INFO, which can be used to query whether a specified CPU has actually gone offline. Calling this function via cpu_kill ensures that a CPU has quiesced after a call to cpu_die. This helps prevent the CPU from doing arbitrary bad things when data or instructions are clobbered (as happens with kexec) in the window between a CPU announcing that it is dead and said CPU leaving the kernel.
Signed-off-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 04 March 2014, 1 commit
-
-
Committed by Mark Brown
Add basic CPU topology support to arm64, based on the existing pre-v8 code and some work done by Mark Hambleton. This patch does not implement any topology discovery support, since that should be based on information from firmware; it merely implements the scaffolding for integration of topology support in the architecture. No locking of the topology data is done, since it is only modified during CPU bringup with external serialisation from the SMP code. The goal is to separate the architecture hookup for providing topology information from the DT parsing, in order to ease review and avoid blocking the architecture code (which will be built on by other work) with the DT code review, by providing something simple and basic. Following patches will implement support for interpreting topology information from MPIDR and for parsing the DT topology bindings for ARM; similar patches will be needed for ACPI.
Signed-off-by: Mark Brown <broonie@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
[catalin.marinas@arm.com: removed CONFIG_CPU_TOPOLOGY, always on if SMP]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
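A sketch of the scaffolding this introduces: a per-CPU topology record plus the accessor macros the scheduler and sysfs consume (shown as it would appear in asm/topology.h; illustrative, not a quote of the patch):

```c
/* Sketch: per-CPU topology data and the standard topology accessors. */
struct cpu_topology {
	int thread_id;
	int core_id;
	int cluster_id;
	cpumask_t thread_sibling;
	cpumask_t core_sibling;
};

extern struct cpu_topology cpu_topology[NR_CPUS];

#define topology_physical_package_id(cpu)	(cpu_topology[cpu].cluster_id)
#define topology_core_id(cpu)			(cpu_topology[cpu].core_id)
#define topology_core_cpumask(cpu)		(&cpu_topology[cpu].core_sibling)
#define topology_thread_cpumask(cpu)		(&cpu_topology[cpu].thread_sibling)
```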
-
- 26 February 2014, 1 commit
-
-
Committed by Vijaya Kumar K
The processor debug state PSTATE.D is unmasked for secondary cpus only within the clear_os_lock smp call, so the debug state remains masked in the normal kernel context. With this patch, the debug state is unmasked on secondary boot for those cpus in the normal kernel context. Now the kgdb tests pass with multiple cores.
Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@caviumnetworks.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-