- 07 Dec 2013, 1 commit
-
-
Submitted by Lorenzo Pieralisi

The refactoring of el2_setup split the code that sets up EL2 and the code that detects the CPU boot mode into separate chunks. This allows the EL2 setup code to run in an endian-independent way, i.e. before the endianness is configured in the respective sctlr registers. This patch brings secondary_entry up to date so that CPUs entering the kernel through this code path set up EL2 and the CPU boot mode properly.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 29 Nov 2013, 2 commits
-
-
Submitted by Matthew Leach

The current breakpoint instruction checking code for A32 is not endian clean. Fix this with appropriate byte-swapping when retrieving instructions.
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
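A minimal sketch of the idea (not the actual patch): A32 instructions are stored in memory as little-endian words, so a BE kernel must byte-swap what it reads before comparing opcodes. The helper name below is illustrative.

```c
#include <linux/types.h>
#include <linux/uaccess.h>
#include <asm/byteorder.h>

/* Hypothetical helper: fetch a 32-bit A32 instruction at 'pc' and return
 * it in native byte order, regardless of kernel endianness. */
static int read_a32_instr(unsigned long pc, u32 *instr)
{
	__le32 instr_le;

	if (get_user(instr_le, (__le32 __user *)pc))
		return -EFAULT;

	/* Instructions are little-endian in memory; convert explicitly. */
	*instr = le32_to_cpu(instr_le);
	return 0;
}
```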
-
Submitted by Matthew Leach

On a BE system the wrong half of the X registers is retrieved/written when attempting to get/set the value of AArch32 registers through ptrace. Ensure that the types have the correct width so that the relevant casting occurs.
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 26 Nov 2013, 2 commits
-
-
Submitted by Catalin Marinas

Asynchronous aborts are generally fatal for the kernel, but they can be masked via the PSTATE A bit. If a system error happens while in kernel mode, it won't be visible until returning to user space. This patch enables this kind of abort early to help identify the cause.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
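As a sketch of the mechanism (assuming the arm64 DAIF encoding, where bit 2 of the `msr daifclr` immediate corresponds to the A/SError mask), unmasking asynchronous aborts is a single system-register write:

```c
/* Unmask asynchronous (SError) aborts: clear the PSTATE.A bit.
 * In the "msr daifclr, #imm" encoding, immediate bit 2 is the A mask. */
static inline void enable_async_aborts(void)
{
	asm volatile("msr daifclr, #4" ::: "memory");
}
```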
-
Submitted by Marc Zyngier

Commit f27dde8d (sched: Add NEED_RESCHED to the preempt_count) introduced the use of bit 31 in preempt_count for obscure scheduling purposes. This causes interrupts taken from EL0 to hit the (open-coded) BUG when this flag is flipped while handling the interrupt (we compare the values before and after, and kill the kernel if they are different). The fix is to stop messing with the preempt count entirely, as this is already dealt with in the generic code (irq_enter/irq_exit). Tested on a dual A53 FPGA running cyclictest.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
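A hedged sketch of the generic pattern the commit relies on: the architecture's interrupt entry only needs to bracket the handler with irq_enter()/irq_exit(), which manage the preempt count (HARDIRQ_OFFSET) themselves, so no open-coded before/after comparison is needed. The function name is illustrative.

```c
#include <linux/hardirq.h>
#include <linux/irq.h>

/* Illustrative interrupt entry path: the generic helpers adjust the
 * preempt count on the way in and out; the arch code never touches it. */
static void handle_arch_irq_example(unsigned int irq)
{
	irq_enter();			/* bumps the hardirq count */
	generic_handle_irq(irq);	/* run the registered handler */
	irq_exit();			/* drops the count, runs softirqs */
}
```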
-
- 13 Nov 2013, 1 commit
-
-
Submitted by Jianguo Wu

Use the more appropriate NUMA_NO_NODE instead of -1 in all archs' module_alloc().
Signed-off-by: Jianguo Wu <wujianguo@huawei.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 09 Nov 2013, 1 commit
-
-
Submitted by Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 08 Nov 2013, 1 commit
-
-
Submitted by Sudeep KarkadaNagesha

The non-IPI interrupts are displayed only for the online CPUs by show_interrupts in kernel/irq/proc.c before calling arch_show_interrupts(). As a result, the column headers and the IPI counts don't match if any CPU is offline. This patch fixes show_ipi_list to display IPIs for online CPUs only.
Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
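A minimal sketch of the fix's shape (names are illustrative, not the exact arm64 code): the per-IPI counters are printed with for_each_online_cpu() so the columns line up with the header that the generic code already prints for online CPUs only.

```c
#include <linux/seq_file.h>
#include <linux/cpumask.h>

/* Hypothetical per-CPU IPI counter lookup, for illustration only. */
extern unsigned int ipi_count(unsigned int cpu, unsigned int ipi_nr);

static void show_ipi_list_example(struct seq_file *p, unsigned int nr_ipi)
{
	unsigned int cpu, i;

	for (i = 0; i < nr_ipi; i++) {
		seq_printf(p, "IPI%u: ", i);
		/* Match the generic header: online CPUs only. */
		for_each_online_cpu(cpu)
			seq_printf(p, "%10u ", ipi_count(cpu, i));
		seq_putc(p, '\n');
	}
}
```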
-
- 06 Nov 2013, 1 commit
-
-
Submitted by T.J. Purtell

The ARM architecture reference specifies that the IT state bits in the PSR must be all zeros in ARM mode, or behaviour is unspecified. If an ARM function is registered as a signal handler, and that signal is delivered inside a block of instructions following an IT instruction, some of the instructions at the beginning of the signal handler may be skipped if the IT state bits of the Program Status Register are not cleared by the kernel.
Signed-off-by: T.J. Purtell <tj@mobisocial.us>
[catalin.marinas@arm.com: code comment and commit log updated]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
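A hedged sketch of the idea: when building a compat (AArch32) signal frame, the kernel clears the IT bits in the saved PSTATE so the handler starts with no residual IT block, and sets the T bit according to the handler's instruction set. The COMPAT_PSR_* names follow the arm64 convention of the time; treat the whole snippet as illustrative rather than the actual patch.

```c
#include <asm/ptrace.h>

/* Illustrative compat signal setup: start the handler with a clean
 * IT state and the correct instruction-set state. */
static void compat_setup_return_example(struct pt_regs *regs,
					unsigned long handler)
{
	unsigned long pstate = regs->pstate;

	pstate &= ~COMPAT_PSR_IT_MASK;		/* no residual IT block */

	if (handler & 1)			/* Thumb entry point */
		pstate |= COMPAT_PSR_T_BIT;
	else
		pstate &= ~COMPAT_PSR_T_BIT;

	regs->pstate = pstate;
	regs->pc = handler & ~1UL;
}
```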
-
- 05 Nov 2013, 4 commits
-
-
Submitted by Will Deacon

Relocations that require an instruction immediate to be re-encoded must ensure that the instruction pattern is represented in a little-endian format for the manipulation code to work correctly. This patch converts the loaded instruction into native endianness prior to encoding and then converts back to little-endian byte order before updating memory.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Tested-by: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
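A minimal sketch of the pattern, assuming a hypothetical encode_immediate() helper: the instruction word (always stored little-endian for A64) is converted to native order for bit manipulation, then converted back before being written out.

```c
#include <linux/types.h>
#include <asm/byteorder.h>

/* Hypothetical immediate encoder, for illustration only. */
extern u32 encode_immediate(u32 insn, u64 imm);

static void patch_insn_example(__le32 *place, u64 imm)
{
	/* A64 instructions are always stored little-endian in memory. */
	u32 insn = le32_to_cpu(*place);

	insn = encode_immediate(insn, imm);

	*place = cpu_to_le32(insn);
}
```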
-
Submitted by Marc Zyngier

preempt_count is defined as an int. Oddly enough, we access it as a 64-bit value. Things become interesting when running a BE kernel and looking at the current CPU number, which is stored as an int next to preempt_count — in a per-CPU interrupt handler, for example. Using a 32-bit access fixes the issue for good.
Cc: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
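The sketch below illustrates the failure mode rather than the actual fix (which is a one-line assembly change): with two adjacent 32-bit fields, a 64-bit load that is later truncated to 32 bits yields preempt_count on little-endian but the neighbouring cpu field on big-endian.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct thread_info_like {
	int32_t preempt_count;	/* the field we actually want */
	int32_t cpu;		/* adjacent field, read instead on BE */
};

int main(void)
{
	struct thread_info_like ti = { .preempt_count = 1, .cpu = 3 };
	uint64_t wide;
	uint32_t narrow;

	/* Emulate a 64-bit load of a 32-bit field, then truncation. */
	memcpy(&wide, &ti.preempt_count, sizeof(wide));
	narrow = (uint32_t)wide;

	/* On LE this prints 1; on BE it prints the neighbouring 'cpu' (3). */
	printf("preempt_count seen as %u\n", narrow);
	return 0;
}
```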
-
Submitted by Marc Zyngier

Commit 53ae3acd (arm64: Only enable local interrupts after the CPU is marked online) moved the enabling of the GIC after the CPUs are marked online. This has an interesting effect:

[...]
[<ffffffc0002eefd8>] gic_raise_softirq+0xf8/0x160
[<ffffffc000088f58>] smp_send_reschedule+0x38/0x40
[<ffffffc0000c8728>] resched_task+0x84/0xc0
[<ffffffc0000c8cdc>] check_preempt_curr+0x58/0x98
[<ffffffc0000c8d38>] ttwu_do_wakeup+0x1c/0xf4
[<ffffffc0000c8f90>] ttwu_do_activate.constprop.84+0x64/0x70
[<ffffffc0000cad30>] try_to_wake_up+0x1d4/0x2b4
[<ffffffc0000cae6c>] default_wake_function+0x10/0x18
[<ffffffc0000c5ca4>] __wake_up_common+0x60/0xa0
[<ffffffc0000c7784>] complete+0x48/0x64
[<ffffffc000088bec>] secondary_start_kernel+0xe8/0x110
[...]

Here, we end up calling gic_raise_softirq without having initialized the interrupt controller for this CPU. While this goes unnoticed with GICv2 (the distributor is always accessible), it explodes with GICv3. The fix is to move the call to notify_cpu_starting before we set the secondary CPU online.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
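A simplified sketch of the corrected ordering in the secondary bring-up path (not the full function): notify_cpu_starting() lets the GIC's CPU_STARTING notifier initialize the CPU interface before other CPUs can observe this CPU as online and IPI it.

```c
#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/irqflags.h>

/* Simplified secondary bring-up ordering, for illustration. */
static void secondary_start_example(unsigned int cpu)
{
	/*
	 * Run CPU_STARTING notifiers (e.g. the GIC per-CPU init) first,
	 * so the interrupt controller is ready for this CPU...
	 */
	notify_cpu_starting(cpu);

	/* ...before anyone can see the CPU online and send it an IPI. */
	set_cpu_online(cpu, true);

	local_irq_enable();
}
```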
-
Submitted by Mark Salter

The .data section in the arm64 linker script currently lacks a definition for page-aligned data. This leads to a .page_aligned section being placed between the end of data and the start of bss. This patch corrects that by using the generic RW_DATA_SECTION macro, which includes support for page-aligned data.
Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 01 Nov 2013, 1 commit
-
-
Submitted by Catalin Marinas

Commit e8765b26 (arm64: read enable-method for CPU0) introduced checks for the enable method on CPU0 (to be used later with CPU suspend). However, if the kernel is compiled for UP and a DT file is used with a method like 'spin-table', Linux complains about an 'invalid enable method'. This patch turns it into an 'unsupported enable method' warning.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 31 Oct 2013, 1 commit
-
-
Submitted by Sudeep KarkadaNagesha

Once the cpu_logical_map for any logical CPU is populated with the corresponding physical identifier (i.e. MPIDR), its device node can be retrieved using the DT helper of_get_cpu_node. Currently the device tree parsing code to get the boot CPU node is duplicated in cpu_read_bootcpu_ops. This patch replaces the code parsing the device tree for the boot CPU with of_get_cpu_node.
Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 30 Oct 2013, 1 commit
-
-
Submitted by Sudeep KarkadaNagesha

The OF/DT core library provides an architecture-specific hook to match the logical CPU index with the corresponding physical identifier. On ARM64, MPIDR_EL1 contains specific bitfields (MPIDR_EL1.Aff{3..0}) which uniquely identify a CPU, in addition to some non-identifying information and reserved bits. The ARM cpu binding defines the 'reg' property to contain only the affinity bits, and any cpu nodes with other bits set in their 'reg' entry are skipped. This patch overrides the weak definition of arch_match_cpu_phys_id with an ARM64-specific version using MPIDR_EL1.Aff{3..0} as the CPU physical identifiers.
Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
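A hedged sketch of what such an override could look like (the mask name is illustrative; the actual patch compares against the affinity value already recorded in cpu_logical_map):

```c
#include <linux/types.h>
#include <asm/smp_plat.h>

/* Illustrative mask covering MPIDR_EL1.Aff{3..0}: Aff0-2 in bits [23:0]
 * and Aff3 in bits [39:32]. */
#define MPIDR_AFFINITY_MASK_EXAMPLE	0x000000ff00ffffffULL

bool arch_match_cpu_phys_id(int cpu, u64 phys_id)
{
	/* The DT 'reg' property carries only the affinity bits, so compare
	 * against the affinity bits of the logical map entry. */
	return (phys_id & MPIDR_AFFINITY_MASK_EXAMPLE) ==
	       (cpu_logical_map(cpu) & MPIDR_AFFINITY_MASK_EXAMPLE);
}
```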
-
- 29 Oct 2013, 1 commit
-
-
Submitted by Christoph Lameter

This is the ARM part of Christoph's patchset cleaning up the various uses of __get_cpu_var across the tree. The idea is to convert __get_cpu_var into either an explicit address calculation using this_cpu_ptr() or into a use of the this_cpu operations that work on the per-CPU offset. Thereby address calculations are avoided and fewer registers are used when code is generated.
[will: fixed debug ref counting checks and pcpu array accesses]
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
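The conversion pattern, sketched on made-up per-CPU variables (names are illustrative, not taken from the patchset):

```c
#include <linux/percpu.h>

struct example_state {
	int count;
};

static DEFINE_PER_CPU(int, example_counter);
static DEFINE_PER_CPU(struct example_state, example_state);

static void percpu_conversion_examples(void)
{
	struct example_state *st;

	/* Before: __get_cpu_var(example_counter)++;
	 * After:  a this_cpu operation that uses the per-CPU offset. */
	this_cpu_inc(example_counter);

	/* Before: st = &__get_cpu_var(example_state);
	 * After:  an explicit address calculation via this_cpu_ptr(). */
	st = this_cpu_ptr(&example_state);
	st->count++;
}
```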
-
- 28 Oct 2013, 1 commit
-
-
Submitted by Robin Murphy

This patch updates the barrier semantics in the kuser helper functions to take advantage of the ARMv8 additions to AArch32, which are guaranteed to be available in situations where these functions will be called. Note that this slightly changes the cmpxchg functions in that they are no longer necessarily full barriers if they return 1. However, the documentation only states that they include their own barriers "as needed", not that they are obligated to act as a full barrier for the caller.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Matthew Leach <matthew.leach@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 25 Oct 2013, 13 commits
-
-
Submitted by Vinayak Kale

This patch fixes the ARMV8_EVTYPE_* macros, since the evtCount (event number) field width is 10 bits in the event selection register.
Signed-off-by: Vinayak Kale <vkale@apm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Matthew Leach

Currently, when CPUs are brought online via a spin-table, the address they should jump to is written to cpu-release-addr in the kernel's native endianness. As the kernel may switch endianness, secondaries might read the value byte-reversed from what was intended, and they would jump to the wrong address. As the only current arm64 spin-table implementations are little-endian, tighten the arm64 spin-table definition so that the value written to cpu-release-addr is _always_ little-endian regardless of the endianness of any CPU. If a spinning CPU is operating big-endian, it must byte-reverse the value before jumping to handle this.
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
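A sketch of the writer side under this convention (mapping and cache-maintenance details are omitted, names are illustrative): the kernel always stores the entry address as a little-endian 64-bit value, so a BE kernel byte-swaps before writing.

```c
#include <linux/types.h>
#include <asm/byteorder.h>

/* Illustrative: publish the secondary entry point through a spin-table
 * release address that is defined to be little-endian. */
static void write_release_addr_example(__le64 *release_addr,
				       phys_addr_t entry_point)
{
	/* cpu_to_le64() is a no-op on LE kernels and a byte swap on BE. */
	*release_addr = cpu_to_le64(entry_point);

	/* The real code would also clean this line to the point of
	 * coherency and issue sev to wake the spinning CPUs. */
}
```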
-
Submitted by Matthew Leach

The endianness of memory accesses at EL2 and EL1 is configured by SCTLR_EL2.EE and SCTLR_EL1.EE respectively. When the kernel is booted, the state of SCTLR_EL{2,1}.EE is unknown, and thus the kernel must ensure that it is set correctly before performing any memory accesses. This patch ensures that SCTLR_EL{2,1} are configured appropriately at boot for kernels of either endianness.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
[catalin.marinas@arm.com: fix SCTLR_EL1.E0E bit setting in head.S]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Matthew Leach

Currently, the code for setting the __cpu_boot_mode flag is munged in with el2_setup. This makes things difficult on a BE bringup, as a memory access has to occur before el2_setup, which is the place where we'd like to set the endianness on the current EL. Create a new function for setting __cpu_boot_mode and have el2_setup return the mode the CPU booted in. Also define a new constant in virt.h, BOOT_CPU_MODE_EL1, for readability.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Matthew Leach

Currently the compat sigreturn code is copied to an offset in the vectors table. When using a BE kernel this data will be stored in the wrong endianness, so when returning from a signal on a 32-bit BE system, arbitrary code will be executed. Instead of declaring the code inside a struct and copying that, use the assembler's .byte directives to store the code in the correct endianness regardless of platform endianness.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Matthew Leach

The arm64 port contains wrappers for arm32 syscalls that pass 64-bit values. These wrappers concatenate two registers to hold a 64-bit value in a single X register. On BE, however, the lower and higher words are swapped. Create a new assembler macro, regs_to_64, that on BE systems swaps the registers in the orr instruction.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
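The same idea expressed in C rather than as the assembler macro (a hedged sketch, not the actual wrapper): which of the two 32-bit registers holds the low word of a compat 64-bit argument depends on the endianness the AArch32 caller uses.

```c
#include <linux/types.h>

/* Combine the two 32-bit halves of a compat (AArch32) 64-bit argument.
 * The AArch32 ABI puts the low word in the lower-numbered register of
 * the pair on little-endian, but in the higher-numbered register on
 * big-endian, so the pairing follows the kernel/compat endianness. */
static inline u64 compat_pair_to_u64(u32 r_even, u32 r_odd)
{
#ifdef __AARCH64EB__
	return ((u64)r_even << 32) | r_odd;
#else
	return ((u64)r_odd << 32) | r_even;
#endif
}
```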
-
Submitted by Will Deacon

uname -m reports the machine field from the current utsname, which should reflect the endianness of the system. This patch reports ELF_PLATFORM for the field, so that everything appears consistent from userspace.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Mark Rutland

This patch adds support for using PSCI CPU_OFF calls for CPU hotplug. With this code it is possible to hot-unplug CPUs with "psci" as their boot-method, as long as there is an appropriate cpu_off function ID specified in the psci node.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Mark Rutland

This patch adds the basic infrastructure necessary to support CPU_HOTPLUG on arm64, based on the arm implementation. Actual hotplug support will depend on an implementation's cpu_operations (e.g. PSCI).
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Mark Rutland

With the advent of CPU_HOTPLUG, the enable-method property for CPU0 may tell us something useful (i.e. how to hotplug it back on), so we must read it along with the enable-methods of all the other CPUs. Even on UP the enable-method may tell us useful information (e.g. if a core has some mechanism that might be usable for cpuidle), so we should always read it. This patch factors out the reading of the enable method and ensures that CPU0's enable method is read regardless of whether the kernel is built with SMP support.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Mark Rutland

The arm64 kernel has an internal holding pen, which is necessary for some systems where we can't bring CPUs online individually and must hold multiple CPUs in a safe area until the kernel is able to handle them. The current SMP infrastructure for arm64 is closely coupled to this holding pen, and alternative boot methods must launch CPUs into the pen, where they sit before they are launched into the kernel proper. With PSCI (and possibly other future boot methods), we can bring CPUs online individually and need not perform the secondary_holding_pen dance. Instead, this patch factors the holding pen management code out to the spin-table boot method code, as it is the only boot method requiring the pen. A new entry point for secondaries, secondary_entry, is added for other boot methods to use, which bypasses the holding pen and its associated overhead when bringing CPUs online. The smp.pen.text section is also removed, as the pen can live in head.text without problem. The cpu_operations structure is extended with two new functions, cpu_boot and cpu_postboot, for bringing a CPU into the kernel and performing any post-boot cleanup required by a boot method (e.g. resetting secondary_holding_pen_release to INVALID_HWID). Documentation is added for cpu_operations.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
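A hedged sketch of the shape of such an operations table, showing only the hooks named above (the struct and field declarations are illustrative, not the exact arm64 header):

```c
struct cpu_operations_example {
	const char	*name;
	/* Bring the given CPU into the kernel. */
	int		(*cpu_boot)(unsigned int cpu);
	/* Post-boot cleanup, run on the freshly booted CPU itself
	 * (e.g. spin-table resets secondary_holding_pen_release). */
	void		(*cpu_postboot)(void);
	/* Other hooks (init, prepare, hotplug) omitted for brevity. */
};
```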
-
Submitted by Mark Rutland

For hotplug support, we're going to want a place to store operations that do more than bring CPUs online, and it makes sense to group these with our current smp_enable_ops. For cpuidle support, we'll want to group additional functions, and we may want them even for UP kernels. This patch renames smp_enable_ops to the more general cpu_operations, and pulls the definitions out of the smp code so that they can be used in UP kernels. While we're at it, fix up instances of the cpu parameter to be an unsigned int, drop the init markings and rename the *_cpu functions to cpu_* to reduce future churn when cpu_operations is extended.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Mark Rutland

The functions in psci.c are only used from smp_psci.c, and smp_psci cannot function without psci.c. Additionally, psci.c is built when !SMP, where it's expected that cpu_suspend may be useful. This patch unifies the two files, removing pointless duplication and paving the way for PSCI support on UP systems.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 24 Oct 2013, 1 commit
-
-
Submitted by Catalin Marinas

This function may be called from loadable modules, so it needs exporting.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Loc Ho <lho@apm.com>
-
- 10 Oct 2013, 4 commits
-
-
Submitted by Rob Herring

Convert arm64 to use the common of_flat_dt_get_machine_name function.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
-
Submitted by Stephen Boyd

Register with the generic sched_clock framework now that it supports 64 bits. This fixes two problems with the current sched_clock support for machines using the architected timers. First, we don't subtract the start value from subsequent sched_clock calls, so we can potentially start off with sched_clock returning gigantic numbers. Second, there is no support for suspend/resume handling, so problems such as those discussed in 6a4dae5e (ARM: 7565/1: sched: stop sched_clock() during suspend, 2012-10-23) can happen without this patch. Finally, it allows us to move the sched_clock setup into drivers/clocksource and out of the arch ports.
Cc: Christopher Covington <cov@codeaurora.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
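A hedged sketch of a registration with the 64-bit-capable generic framework (the counter read-back and rate variable are illustrative, not the actual driver code):

```c
#include <linux/types.h>
#include <linux/sched_clock.h>

/* Hypothetical hardware counter read-back, for illustration only. */
extern u64 read_hw_counter(void);

static u64 example_counter_read(void)
{
	return read_hw_counter();
}

static void example_sched_clock_init(unsigned long rate_hz)
{
	/* 56 valid counter bits; the generic framework handles wrap,
	 * start-value subtraction and suspend/resume from here on. */
	sched_clock_register(example_counter_read, 56, rate_hz);
}
```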
-
Submitted by Rob Herring

Create a weak version of early_init_dt_add_memory_arch which uses memblock. This will unify all architectures except those with custom memory bank structs.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Jonas Bonn <jonas@southpole.se>
Acked-by: Grant Likely <grant.likely@linaro.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: microblaze-uclinux@itee.uq.edu.au
Cc: linux@lists.openrisc.net
Cc: devicetree@vger.kernel.org
-
Submitted by Rob Herring

Convert arm64 to use the new early_init_dt_scan function.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
-
- 28 Sep 2013, 1 commit
-
-
Submitted by Jiang Liu

If a context switch happens while executing fpsimd_flush_thread(), stale values in the FPSIMD registers will be saved into the current thread's fpsimd_state by fpsimd_thread_switch(). That may cause an invalid initialization state for the new process, so disable preemption when executing fpsimd_flush_thread().
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Jiang Liu <liuj97@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
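A sketch of the fix's shape (assuming the arm64 thread_struct layout of the time; the reload helper is a stand-in for the arch-specific one): the zeroing of the saved state and its reload into the registers must not be separated by a context switch, so the whole sequence runs with preemption disabled.

```c
#include <linux/preempt.h>
#include <linux/sched.h>
#include <linux/string.h>

/* Hypothetical arch helper that loads current's saved FPSIMD state
 * back into the hardware registers. */
extern void fpsimd_load_current_state(void);

static void fpsimd_flush_thread_example(void)
{
	preempt_disable();
	/* Zero the saved state... */
	memset(&current->thread.fpsimd_state, 0,
	       sizeof(current->thread.fpsimd_state));
	/* ...and make the registers match it before a context switch can
	 * save stale register contents over the freshly zeroed state. */
	fpsimd_load_current_state();
	preempt_enable();
}
```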
-
- 26 Sep 2013, 1 commit
-
-
Submitted by Sudeep KarkadaNagesha

This patch adds support for configuring the event stream frequency and enabling it. It also adds the hwcaps as well as compat-specific definitions so that userspace can detect this event stream feature.
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
-
- 20 Sep 2013, 2 commits
-
-
Submitted by Steve Capper

Under arm64, elf_hwcap is a 32-bit quantity, but it is stored in a 64-bit auxiliary ELF field and glibc reads hwcap as 64-bit. This patch widens elf_hwcap to 64 bits.
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Catalin Marinas

When a task crashes and we print debugging information, ensure that compat tasks show the actual AArch32 LR and SP registers rather than the AArch64 ones.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-