- 25 April 2016, 2 commits
-
-
Committed by Ashok Kumar

Defined all the ARMv8 recommended implementation-defined events from J3 - "ARM recommendations for IMPLEMENTATION DEFINED event numbers" in ARM DDI 0487A.g.

Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Ashok Kumar

Changed all the common event name definitions to match the document ARM DDI 0487A.g. SoC-specific event names follow the general naming style in the file and do not reflect any document.

Changed ARMV8_A53_PERFCTR_PREFETCH_LINEFILL to ARMV8_A53_PERFCTR_PREF_LINEFILL to match the other SoC-specific event names, which use the _PREF_ style. Corrected the typo l21 to l2i.

Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
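For context, a few of the architected ("common") event numbers that these definitions follow are sketched below; the values come from the ARMv8 PMU architecture, but the macro names here are illustrative rather than the exact identifiers used in the file.

    /* Illustrative sketch: a handful of ARMv8 architected (common) PMU event
     * numbers as listed in ARM DDI 0487A.g; the macro names are examples only. */
    #define ARMV8_PMUV3_PERFCTR_SW_INCR           0x00  /* software increment */
    #define ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL  0x03  /* L1 data cache refill */
    #define ARMV8_PMUV3_PERFCTR_L1D_CACHE         0x04  /* L1 data cache access */
    #define ARMV8_PMUV3_PERFCTR_INST_RETIRED      0x08  /* instruction architecturally executed */
    #define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED       0x10  /* mispredicted branch, speculatively executed */
    #define ARMV8_PMUV3_PERFCTR_CPU_CYCLES        0x11  /* processor cycles */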
-
- 29 March 2016, 1 commit
-
-
Committed by Shannon Zhao

To use the ARMv8 PMU related register defines from the KVM code, we move the relevant definitions to the asm/perf_event.h header file and rename them with the prefix ARMV8_PMU_. This allows us to get rid of kvm_perf_event.h.

Signed-off-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
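As a rough illustration of the kind of definitions that end up in asm/perf_event.h under the new prefix, the PMCR_EL0 control bits of the ARMv8 PMU are sketched below; treat the exact macro spellings as an assumption rather than a quote from the header.

    /* Sketch of ARMv8 PMCR_EL0 control bits under the ARMV8_PMU_ prefix. */
    #define ARMV8_PMU_PMCR_E   (1 << 0)  /* enable all counters */
    #define ARMV8_PMU_PMCR_P   (1 << 1)  /* reset all event counters */
    #define ARMV8_PMU_PMCR_C   (1 << 2)  /* reset the cycle counter */
    #define ARMV8_PMU_PMCR_D   (1 << 3)  /* clock divider: count once every 64 cycles */
    #define ARMV8_PMU_PMCR_X   (1 << 4)  /* export events to an external monitor */
    #define ARMV8_PMU_PMCR_DP  (1 << 5)  /* disable cycle counter when event counting is prohibited */
    #define ARMV8_PMU_PMCR_LC  (1 << 6)  /* long (64-bit) cycle counter */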
-
- 26 March 2016, 1 commit
-
-
Committed by Alexander Potapenko

KASAN needs to know whether the allocation happens in an IRQ handler. This lets us strip everything below the IRQ entry point to reduce the number of unique stack traces that need to be stored.

Move the definition of __irq_entry to <linux/interrupt.h> so that users don't need to pull in <linux/ftrace.h>. Also introduce the __softirq_entry macro, which is similar to __irq_entry but puts the corresponding functions into the .softirqentry.text section.

Signed-off-by: Alexander Potapenko <glider@google.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrey Konovalov <adech.fo@gmail.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Dmitry Chernenkov <dmitryc@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
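A minimal sketch of how such entry-point annotations are typically defined, assuming GCC-style section attributes; the handler name below is hypothetical.

    /* Functions tagged with these markers are grouped into dedicated sections,
     * so a stack-trace post-processor (here, KASAN) can recognise the IRQ or
     * softirq entry point by address range and strip everything below it. */
    #define __irq_entry      __attribute__((__section__(".irqentry.text")))
    #define __softirq_entry  __attribute__((__section__(".softirqentry.text")))

    /* Hypothetical example of use. */
    static void __softirq_entry example_softirq_action(void)
    {
            /* ... softirq handler body ... */
    }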
-
- 25 March 2016, 1 commit
-
-
Committed by Ard Biesheuvel

The KASLR code incorrectly expects the contents of x18 to be preserved across a call into C code, and uses it to stash the contents of SCTLR_EL1 before enabling the MMU. If the MMU needs to be disabled again to create the randomized kernel mapping, x18 is written back to SCTLR_EL1, which is likely to crash the system if x18 has been clobbered by kasan_early_init() or kaslr_early_init(). So use x22 instead, which is not used elsewhere in head.S so far.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 21 March 2016, 2 commits
-
-
Committed by Mark Rutland

Commit f80fb3a3 ("arm64: add support for kernel ASLR") missed a DSB necessary to complete I-cache maintenance in the primary boot path, and hence stale instructions may still be present in the I-cache and may be executed until the I-cache maintenance naturally completes.

Since commit 8ec41987 ("arm64: mm: ensure patched kernel text is fetched from PoU"), all CPUs invalidate their I-caches after their MMU is enabled. Prior to a CPU's MMU being enabled, arbitrary lines may have been fetched from the PoC into its I-cache. We never patch text expected to be executed with the MMU off. Thus, it is unnecessary to perform broadcast I-cache maintenance in the primary boot path.

This patch reduces the scope of the I-cache maintenance to the local CPU, and adds the missing DSB with similar scope, matching prior maintenance in the primary boot path.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel

The implementation of the macro inv_entry refers to its 'el' argument without the required leading backslash, which results in an undefined symbol 'el' being passed into the kernel_entry macro rather than the index of the exception level as intended.

Strangely enough, this undefined symbol does not result in build failures, although it is visible in vmlinux:

    $ nm -n vmlinux | head
                     U el
    0000000000000000 A _kernel_flags_le_hi32
    0000000000000000 A _kernel_offset_le_hi32
    0000000000000000 A _kernel_size_le_hi32
    000000000000000a A _kernel_flags_le_lo32
    .....

However, it does result in incorrect code being generated for invalid exceptions taken from EL0, since the argument check in kernel_entry assumes EL1 if its argument does not equal '0'.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 10 March 2016, 1 commit
-
-
Committed by Mark Rutland

Functions which the compiler has instrumented for KASAN place poison on the stack shadow upon entry and remove this poison prior to returning.

In the case of cpuidle, CPUs exit the kernel a number of levels deep in C code. Any instrumented functions on this critical path will leave portions of the stack shadow poisoned.

If CPUs lose context and return to the kernel via a cold path, the prior context saved in __cpu_suspend_enter is restored, the function calls made between that point and the actual exit of the kernel are forgotten, and the poison they placed in the stack shadow area is never removed. Thus, (depending on stackframe layout) subsequent calls to instrumented functions may hit this stale poison, resulting in (spurious) KASAN splats to the console.

To avoid this, clear any stale poison from the idle thread for a CPU prior to bringing that CPU online.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
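A toy sketch of the underlying idea (not the kernel's actual KASAN code): each chunk of stack has a shadow byte, instrumented functions mark their frame's shadow non-zero on entry and zero it on return, and clearing the whole shadow wipes any poison left behind by a cold exit path. All names and sizes below are made up for illustration.

    #include <string.h>
    #include <stdint.h>

    #define STACK_SIZE          4096  /* toy stack size for illustration */
    #define SHADOW_SCALE_SHIFT  3     /* one shadow byte covers 8 stack bytes */

    /* Toy model of a task stack's KASAN shadow. */
    static uint8_t shadow[STACK_SIZE >> SHADOW_SCALE_SHIFT];

    /* Instrumented functions poison (write non-zero to) the shadow of their
     * frame on entry and clear it on return.  A cold exit path skips those
     * returns, so stale poison survives; clearing the whole stack's shadow
     * before the CPU is brought back online removes it. */
    static void unpoison_whole_stack(void)
    {
            memset(shadow, 0, sizeof(shadow));
    }

    int main(void)
    {
            shadow[10] = 0xF4;       /* pretend a suspend path left stale poison behind */
            unpoison_whole_stack(); /* roughly what clearing the idle thread's shadow achieves */
            return shadow[10];      /* 0: no spurious KASAN splat for this slot */
    }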
-
- 05 March 2016, 2 commits
-
-
Committed by Adam Buchbinder

Signed-off-by: Adam Buchbinder <adam.buchbinder@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel

The prologue of the EFI entry point pushes x29 and x30 onto the stack, but fails to create the stack frame correctly by omitting the assignment of x29 to the new value of the stack pointer. So fix that.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 04 March 2016, 1 commit
-
-
Committed by Mark Rutland

Commit 0f54b14e ("arm64: cpufeature: Change read_cpuid() to use sysreg's mrs_s macro") changed read_cpuid to require a SYS_ prefix on register names, to allow manual assembly of registers unknown by the toolchain, using tables in sysreg.h.

This interacts poorly with commit 42b55734 ("efi/arm64: Check for h/w support before booting a >4 KB granular kernel"), which is currently queued via the tip tree and uses read_cpuid without a SYS_ prefix. Due to this, a build of next-20160304 fails if EFI and 64K pages are selected.

To avoid this issue when the trees are merged, move the required SYS_ prefixing into read_cpuid, and revert all of the updated callsites to pass plain register names. This effectively reverts the bulk of commit 0f54b14e.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
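The mechanics of "move the SYS_ prefixing into read_cpuid" boil down to preprocessor token pasting. The standalone toy below illustrates the idea only; the real read_cpuid() wraps an mrs_s-based register access, and the helper and string values here are invented.

    #include <stdio.h>

    #define SYS_ID_AA64MMFR0_EL1  "id_aa64mmfr0_el1"
    #define read_cpuid(reg)       read_sysreg_by_name(SYS_ ## reg)

    static const char *read_sysreg_by_name(const char *name)
    {
            return name;  /* stand-in for the mrs_s-based register access */
    }

    int main(void)
    {
            /* The caller passes the plain register name; the macro supplies the
             * SYS_ prefix, which resolves to a definition from a sysreg table. */
            printf("%s\n", read_cpuid(ID_AA64MMFR0_EL1));
            return 0;
    }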
-
- 02 March 2016, 2 commits
-
-
Committed by Mark Rutland

We validate pstate using PSR_MODE32_BIT, which is part of the user-provided pstate (and cannot be trusted). Also, we conflate the validation of AArch32 and AArch64 pstate values, making the code difficult to reason about.

Instead, validate the pstate value based on the associated task. The task may or may not be current (e.g. when using ptrace), so this must be passed explicitly by callers. To avoid circular header dependencies via sched.h, is_compat_task is pulled out of asm/ptrace.h.

To make the code possible to reason about, the AArch64 and AArch32 validation is split into separate functions. Software must respect the RES0 policy for SPSR bits, and thus the kernel mirrors the hardware policy (RAZ/WI) for bits as yet unallocated. When these acquire an architected meaning, writes may be permitted (potentially with additional validation).

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
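A much-simplified sketch of the split described above, assuming only the standard PSR mode encodings; the real validators also check debug/interrupt mask bits and the RES0 policy, and the function names here are illustrative.

    #include <stdbool.h>
    #include <stdint.h>

    #define PSR_MODE32_BIT  0x00000010  /* set for AArch32 execution states */
    #define PSR_MODE_MASK   0x0000000f
    #define PSR_MODE_EL0t   0x00000000

    /* Which validator runs is decided by the task's own compat property,
     * never by the (untrusted) user-supplied pstate itself. */
    static bool valid_native_regs(uint64_t pstate)
    {
            /* An AArch64 task must not set the AArch32 bit and must be in EL0t.
             * The real code also rejects set RES0/debug bits. */
            return !(pstate & PSR_MODE32_BIT) &&
                   (pstate & PSR_MODE_MASK) == PSR_MODE_EL0t;
    }

    static bool valid_compat_regs(uint64_t pstate)
    {
            /* An AArch32 task must have the AArch32 bit set; mode and RES0
             * checks for the AArch32 PSR layout would follow here. */
            return (pstate & PSR_MODE32_BIT) != 0;
    }

    static bool valid_user_pstate(bool task_is_compat, uint64_t pstate)
    {
            return task_is_compat ? valid_compat_regs(pstate)
                                  : valid_native_regs(pstate);
    }

    int main(void)
    {
            uint64_t pstate = PSR_MODE_EL0t;  /* hypothetical SPSR from a signal frame */

            return valid_user_pstate(false, pstate) ? 0 : 1;
    }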
-
Committed by Thomas Gleixner

Let the non-boot CPUs call into idle with the corresponding hotplug state, so the hotplug core can handle the further bringup. That's a first step towards converting the boot side of the hotplugged CPUs to do all the synchronization with the other side through the state machine. For now it'll only start the hotplug thread and kick the full bringup of the CPU.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
Cc: Rik van Riel <riel@redhat.com>
Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
Cc: "Srivatsa S. Bhat" <srivatsa@mit.edu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Sebastian Siewior <bigeasy@linutronix.de>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul Turner <pjt@google.com>
Link: http://lkml.kernel.org/r/20160226182341.614102639@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 01 March 2016, 5 commits
-
-
Committed by Will Deacon

Commit 7175f059 ("arm64: perf: Enable PMCR long cycle counter bit") added initial support for a 64-bit cycle counter enabled using PMCR.LC. Unfortunately, that patch doesn't extend ARMV8_EVTYPE_MASK, so any attempts to set the enable bit are ignored by armv8pmu_pmcr_write. This patch extends the mask to include the new bit.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Marc Zyngier

With ARMv8.1 VHE, the architecture is able to (almost) transparently run the kernel at EL2, despite being written for EL1. This patch takes care of the "almost" part, mostly preventing the kernel from dropping from EL2 to EL1, and setting up the HYP configuration.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Marc Zyngier

When the kernel is running in HYP (with VHE), it is necessary to include EL2 events if the user requests counting kernel or hypervisor events.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Marc Zyngier

The fault decoding process (including computing the IPA in the case of a permission fault) would be much better done in C code, as we have a reasonable infrastructure to deal with the VHE/non-VHE differences. Let's move the whole thing to C, including the workaround for erratum 834220, and just patch the odd ESR_EL2 access remaining in hyp-entry.S.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Marc Zyngier

Add a new ARM64_HAS_VIRT_HOST_EXTN feature to indicate that the CPU has the ARMv8.1 VHE capability. This will be used to trigger kernel patching in KVM.

Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 26 February 2016, 4 commits
-
-
Committed by Lorenzo Pieralisi

When secondary CPUs are booted through the ACPI parking protocol, the booted CPU should check that FW has correctly cleared its mailbox entry point value to make sure the boot process was correctly executed. The entry point check is carried out in the cpu_ops->cpu_postboot method, which is executed by secondary CPUs when entering the kernel with irqs disabled.

The ACPI parking protocol cpu_ops maps/unmaps the mailboxes on the primary CPU to trigger secondary boot in the cpu_ops->cpu_boot method, and on secondary processors to carry out FW checks on the booted CPU and verify the boot protocol was successfully executed in the cpu_ops->cpu_postboot method. Therefore, the cpu_ops->cpu_postboot method is forced to ioremap/unmap the mailboxes, which is wrong in that ioremap cannot safely be carried out with irqs disabled.

To fix this issue, this patch reshuffles the code so that the mailboxes are still mapped after the boot processor executes the cpu_ops->cpu_boot method for a given CPU, and the VA at which a mailbox is mapped for a given CPU is stashed in the per-cpu data struct, so that secondary CPUs can retrieve them in the cpu_ops->cpu_postboot method and complete the required FW checks.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reported-by: Itaru Kitayama <itaru.kitayama@riken.jp>
Tested-by: Loc Ho <lho@apm.com>
Tested-by: Itaru Kitayama <itaru.kitayama@riken.jp>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Loc Ho <lho@apm.com>
Cc: Itaru Kitayama <itaru.kitayama@riken.jp>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Al Stone <ahs3@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K Poulose

The ARMv8.2 extensions [1] include an optional feature which supports half-precision (16-bit) floating point/ASIMD data processing instructions. This patch adds support for detecting this feature and exposing it to userspace via HWCAPs.

[1] https://community.arm.com/groups/processors/blog/2016/01/05/armv8-a-architecture-evolution

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
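For illustration, userspace would typically probe such a capability through the auxiliary vector. A minimal sketch, assuming the HWCAP_FPHP/HWCAP_ASIMDHP bit values used by the arm64 uapi headers (repeated here only as a fallback for older headers):

    #include <stdio.h>
    #include <sys/auxv.h>

    #ifndef HWCAP_FPHP
    #define HWCAP_FPHP     (1 << 9)   /* bit values as used by the arm64 uapi header */
    #endif
    #ifndef HWCAP_ASIMDHP
    #define HWCAP_ASIMDHP  (1 << 10)
    #endif

    int main(void)
    {
            unsigned long hwcap = getauxval(AT_HWCAP);

            printf("FP half precision:    %s\n", (hwcap & HWCAP_FPHP) ? "yes" : "no");
            printf("ASIMD half precision: %s\n", (hwcap & HWCAP_ASIMDHP) ? "yes" : "no");
            return 0;
    }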
-
Committed by Andrew Pinski

On ThunderX T88 pass 1.x through 2.1 parts, broadcast TLBI instructions may cause the icache to become corrupted if it contains data for a non-current ASID. This patch implements the workaround (which invalidates the local icache when switching the mm) by using code patching.

Signed-off-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: David Daney <david.daney@cavium.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Jeremy Linton

Currently the .rodata section is still executable when DEBUG_RODATA is enabled. This changes that so .rodata is actually read-only and non-executable. It also adds the .rodata section to the mem_init banner.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
[catalin.marinas@arm.com: added vm_struct vmlinux_rodata in map_kernel()]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 25 February 2016, 10 commits
-
-
Committed by Suzuki K Poulose

Now that we have a clear understanding of the sign of a feature, rename the routines to reflect the sign, so that they are not misused. The cpuid_feature_extract_field() now accepts a 'sign' parameter.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K Poulose

Use the appropriate accessor for the feature bit by keeping track of the sign of the feature.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K Poulose

There is confusion about whether the values of a feature are signed or not in ARM, and this is not clearly mentioned in the ARM ARM either. We have dealt with most of the bits as signed so far, and marked the rest as unsigned explicitly. This is being fixed in the ARM ARM and will be rolled out soon.

Here are the criteria in a nutshell:

1) The fields, which are either signed or unsigned, use increasing numerical values to indicate an increase in functionality. Thus, if a value of 0x1 indicates the presence of some instructions, then the 0x2 value will indicate the presence of those instructions plus some additional instructions or functionality.

2) For ID field values where the value 0x0 defines that a feature is not present, the number is an unsigned value.

3) For some features where the feature was made optional or removed after the start of the definition of the architecture, the value 0x0 is used to indicate the presence of a feature, and 0xF indicates the absence of the feature. In these cases, the fields are, in effect, holding signed values.

With these rules applied, only the following fields are signed, and the rest are unsigned:

a) ID_AA64PFR0_EL1: {FP, ASIMD}
b) ID_AA64MMFR0_EL1: {TGran4K, TGran64K}
c) ID_AA64DFR0_EL1: PMUVer (0xf - PMUv3 not implemented)
d) ID_DFR0_EL1: PerfMon
e) ID_MMFR0_EL1: {InnerShr, OuterShr}

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
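The signed/unsigned distinction matters because a 4-bit ID field read as signed turns 0xF into -1 ("absent"), while the same bits read as unsigned look like the highest level of support. A self-contained sketch of the two extraction flavours (mirroring the idea behind a sign-aware cpuid_feature_extract_field(), with invented helper names and a made-up register value):

    #include <stdint.h>
    #include <stdio.h>

    /* Extract the 4-bit ID register field starting at 'shift', either
     * sign-extended or zero-extended. */
    static int64_t extract_signed(uint64_t reg, unsigned int shift)
    {
            return (int64_t)(reg << (64 - shift - 4)) >> 60;
    }

    static uint64_t extract_unsigned(uint64_t reg, unsigned int shift)
    {
            return (reg >> shift) & 0xf;
    }

    int main(void)
    {
            /* Made-up ID_AA64PFR0_EL1 value with the FP field (bits [19:16])
             * set to 0xf, i.e. floating point not implemented. */
            uint64_t id_aa64pfr0 = 0xfULL << 16;

            printf("FP as signed field:   %lld\n", (long long)extract_signed(id_aa64pfr0, 16));            /* -1 */
            printf("FP as unsigned field: %llu\n", (unsigned long long)extract_unsigned(id_aa64pfr0, 16)); /* 15 */
            return 0;
    }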
-
Committed by Suzuki K Poulose

Correct the feature bit entries for ID_DFR0 and ID_MMFR0 to fix the default safe value for some of the bits.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K Poulose

Adds a hook for checking whether a secondary CPU has the features already used by the kernel during early boot, based on the boot CPU, and plugs in a check for the ASID size.

The ID_AA64MMFR0_EL1:ASIDBits field determines the size of the mm context id and is used early in boot to make decisions. The value is picked up from the boot CPU and cannot be delayed until other CPUs are up. If a secondary CPU has a smaller size than that of the boot CPU, things will break horribly, and the usual SANITY check is not good enough to prevent the system from crashing. So, crash the system with enough information.

Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K Poulose

We verify the capabilities of the secondary CPUs only when hotplug is enabled; the boot-time activated CPUs skip the verification, since they check whether the system-wide capabilities have been initialised, and these are not yet set up when such CPUs come online.

This patch removes the capability check's dependency on CONFIG_HOTPLUG_CPU, to make sure that all the secondary CPUs go through the check. The boot-time activated CPUs will still skip the system-wide capability check. The plan is to hook in a check for CPU features used by the kernel at early boot, based on the boot CPU values.

Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K Poulose

A secondary CPU could fail to come online due to insufficient capabilities and could simply die or loop in the kernel. For example, a CPU with no support for the selected kernel PAGE_SIZE loops in the kernel with the MMU turned off, and a hotplugged CPU which doesn't have one of the advertised system capabilities will die during activation. There is no way to synchronise the status of the failing CPU back to the master.

This patch solves the issue by adding a field to the secondary_data which can be updated by the failing CPU. If the secondary CPU fails even before turning the MMU on, it updates the status in a special variable reserved in the head.text section, to make sure that the update can be cache-invalidated safely without possible sharing of the cache writeback granule.

Here are the possible states:

-1. CPU_MMU_OFF - Initial value set by the master CPU. This value indicates that the CPU could not turn the MMU on, hence the status could not be reliably updated in the secondary_data. Instead, the CPU has updated the status at __early_cpu_boot_status.

0. CPU_BOOT_SUCCESS - CPU has booted successfully.

1. CPU_KILL_ME - CPU has invoked cpu_ops->die, indicating to the master CPU to synchronise by issuing a cpu_ops->cpu_kill.

2. CPU_STUCK_IN_KERNEL - CPU couldn't invoke die(), and is instead looping in the kernel. This information could be used by, say, kexec to check if it is really safe to do a kexec reboot.

3. CPU_PANIC_KERNEL - CPU detected some serious issues which require the kernel to crash immediately. The secondary CPU cannot call panic() until it has initialised the GIC. This flag can be used to instruct the master to do so.

Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
[catalin.marinas@arm.com: conflict resolution]
[catalin.marinas@arm.com: converted "status" from int to long]
[catalin.marinas@arm.com: updated update_early_cpu_boot_status to use str_l]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
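For reference, the status values enumerated above would look roughly like the following as C constants (a sketch following the commit text; take the exact spellings as illustrative):

    /* Sketch of the boot status values described above; the names follow the
     * commit text, and the numeric values mirror the -1..3 list. */
    #define CPU_MMU_OFF          (-1)  /* set by the master; real status is in __early_cpu_boot_status */
    #define CPU_BOOT_SUCCESS      (0)  /* secondary came up fine */
    #define CPU_KILL_ME           (1)  /* secondary called cpu_ops->die; master should cpu_kill it */
    #define CPU_STUCK_IN_KERNEL   (2)  /* secondary cannot die and keeps looping in the kernel */
    #define CPU_PANIC_KERNEL      (3)  /* secondary asks the master to panic on its behalf */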
-
Committed by Suzuki K Poulose

This patch moves cpu_die_early to smp.c, where it fits better. No functional changes, except for adding the necessary checks for CONFIG_HOTPLUG_CPU.

Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K Poulose

In other words, make fail_incapable_cpu() reusable. We use fail_incapable_cpu() to kill a secondary CPU early during bringup when it doesn't have the system-advertised capabilities. This patch makes the routine more generic, so it can kill any secondary booting CPU, getting rid of the dependency on the capability struct. This can be used by checks which are not necessarily attached to a capability struct (e.g. cpu ASIDBits).

In the process, the function is renamed to cpu_die_early() to better match its functionality. It will be moved to arch/arm64/kernel/smp.c later.

Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K Poulose

Adds a routine which can be used to park CPUs (spinning in the kernel) when they can't be killed.

Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 24 February 2016, 7 commits
-
-
Committed by Ard Biesheuvel

When KASLR is enabled (CONFIG_RANDOMIZE_BASE=y), and entropy has been provided by the bootloader, randomize the placement of RAM inside the linear region if sufficient space is available. For instance, on a 4 KB granule/3 levels kernel, the linear region is 256 GB in size, and we can choose any 1 GB aligned offset that is far enough from the top of the address space to fit the distance between the start of the lowest memblock and the top of the highest memblock.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
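To make the arithmetic concrete, here is a rough, self-contained sketch of that placement decision; the function name, the seed handling, and the example memory span are all invented for illustration.

    #include <stdint.h>
    #include <stdio.h>

    #define SZ_1G  (1ULL << 30)

    /* Pick a random, 1 GB aligned offset for where RAM starts inside the
     * linear map, given how much slack the linear region leaves over the
     * physical memory span. */
    static uint64_t pick_linear_offset(uint64_t linear_region_size,
                                       uint64_t memory_span, uint64_t seed)
    {
            uint64_t slack = linear_region_size - memory_span;

            if (slack < SZ_1G)
                    return 0;  /* not enough room to randomize */

            return (seed % (slack / SZ_1G)) * SZ_1G;  /* choose a 1 GB aligned slot */
    }

    int main(void)
    {
            /* Example from the commit text: 4 KB granule / 3 levels gives a
             * 256 GB linear region; assume 12 GB between the lowest and the
             * highest memblock, and an arbitrary seed. */
            uint64_t off = pick_linear_offset(256 * SZ_1G, 12 * SZ_1G, 0x123456789ULL);

            printf("linear map offset: %llu GB\n", (unsigned long long)(off / SZ_1G));
            return 0;
    }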
-
Committed by Ard Biesheuvel

This adds support for KASLR, based on entropy provided by the bootloader in the /chosen/kaslr-seed DT property. Depending on the size of the address space (VA_BITS) and the page size, the entropy in the virtual displacement is up to 13 bits (16k/2 levels) and up to 25 bits (all 4 levels), with the side note that displacements which result in the kernel image straddling a 1GB/32MB/512MB alignment boundary (for 4KB/16KB/64KB granule kernels, respectively) are not allowed, and will be rounded up to an acceptable value.

If CONFIG_RANDOMIZE_MODULE_REGION_FULL is enabled, the module region is randomized independently from the core kernel. This makes it less likely that the location of core kernel data structures can be determined by an adversary, but causes all function calls from modules into the core kernel to be resolved via entries in the module PLTs.

If CONFIG_RANDOMIZE_MODULE_REGION_FULL is not enabled, the module region is randomized by choosing a page-aligned 128 MB region inside the interval [_etext - 128 MB, _stext + 128 MB). This gives between 10 and 14 bits of entropy (depending on page size), independently of the kernel randomization, but still guarantees that modules are within the range of relative branch and jump instructions (with the caveat that, since the module region is shared with other uses of the vmalloc area, modules may need to be loaded further away if the module region is exhausted).

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel

This implements CONFIG_RELOCATABLE, which links the final vmlinux image with a dynamic relocation section, allowing the early boot code to perform a relocation to a different virtual address at runtime. This is a prerequisite for KASLR (CONFIG_RANDOMIZE_BASE).

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel

Instead of using absolute addresses for both the exception location and the fixup, use offsets relative to the exception table entry values. Not only does this cut the size of the exception table in half, it is also a prerequisite for KASLR, since absolute exception table entries are subject to dynamic relocation, which is incompatible with the sorting of the exception table that occurs at build time.

This patch also introduces the _ASM_EXTABLE preprocessor macro (which exists on x86 as well) and its _asm_extable assembly counterpart, as shorthands to emit exception table entries.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
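A self-contained sketch of what "offsets relative to the entry" means in practice: each field stores the distance from its own address to the target, so recovering the absolute address is a simple addition and the table itself carries no absolute pointers. The struct layout and helper names below are illustrative.

    #include <stdint.h>
    #include <stdio.h>

    /* Each field stores an offset from the field's own address, so the table
     * contains no absolute addresses and needs no dynamic relocations. */
    struct exception_table_entry {
            int32_t insn;   /* offset to the faulting instruction */
            int32_t fixup;  /* offset to the fixup code */
    };

    static uintptr_t ex_insn_addr(const struct exception_table_entry *e)
    {
            return (uintptr_t)&e->insn + e->insn;
    }

    static uintptr_t ex_fixup_addr(const struct exception_table_entry *e)
    {
            return (uintptr_t)&e->fixup + e->fixup;
    }

    int main(void)
    {
            /* Toy demonstration: stand-ins for a faulting instruction and its
             * fixup; everything is static so the offsets stay small. */
            static char insn_site, fixup_site;
            static struct exception_table_entry e;

            e.insn  = (int32_t)((uintptr_t)&insn_site  - (uintptr_t)&e.insn);
            e.fixup = (int32_t)((uintptr_t)&fixup_site - (uintptr_t)&e.fixup);

            printf("insn recovered:  %d\n", ex_insn_addr(&e)  == (uintptr_t)&insn_site);
            printf("fixup recovered: %d\n", ex_fixup_addr(&e) == (uintptr_t)&fixup_site);
            return 0;
    }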
-
Committed by Ard Biesheuvel

Before implementing KASLR for arm64 by building a self-relocating PIE executable, we have to ensure that values we use before the relocation routine is executed are not subject to dynamic relocation themselves. This applies not only to virtual addresses, but also to values that are supplied by the linker at build time and relocated using R_AARCH64_ABS64 relocations.

So instead, use assembly-time constants, or force the use of static relocations by folding the constants into the instructions.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel

Unfortunately, the current way of using the linker to emit build-time constants into the Image header will no longer work once we switch to the use of PIE executables. The reason is that such constants are emitted into the binary using R_AARCH64_ABS64 relocations, which are resolved at runtime, not at build time, and the places targeted by those relocations will contain zeroes before that.

So refactor the endian-swapping linker script constant generation code so that it emits the upper and lower 32-bit words separately.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
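The trick is simply that a 64-bit constant which would need an R_AARCH64_ABS64 relocation can instead be emitted as two independent 32-bit halves resolved at build time. A small sketch of the splitting, with invented macro names and an invented example value:

    #include <stdint.h>
    #include <stdio.h>

    /* Splitting a 64-bit build-time constant into two 32-bit halves; each half
     * can be emitted directly, avoiding an R_AARCH64_ABS64 relocation. */
    #define DATA_LO32(x)  ((uint32_t)((x) & 0xffffffffU))
    #define DATA_HI32(x)  ((uint32_t)((x) >> 32))

    int main(void)
    {
            uint64_t kernel_size = 0x0000000001234000ULL;  /* made-up example value */

            printf("_kernel_size_le_lo32 = 0x%08x\n", DATA_LO32(kernel_size));
            printf("_kernel_size_le_hi32 = 0x%08x\n", DATA_HI32(kernel_size));
            return 0;
    }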
-
Committed by Ard Biesheuvel

This adds support for emitting PLTs at module load time for relative branches that are out of range. This is a prerequisite for KASLR, which may place the kernel and the modules anywhere in the vmalloc area, making it more likely that branch target offsets exceed the maximum range of +/- 128 MB.

In this version, I removed the distinction between relocations against .init executable sections and ordinary executable sections. The reason is that it is hardly worth the trouble, given that .init.text usually does not contain that many far branches, and this version now only reserves PLT entry space for jump and call relocations against undefined symbols (since symbols defined in the same module can be assumed to be within +/- 128 MB).

For example, the mac80211.ko module (which is fairly sizable at ~400 KB) built with -mcmodel=large gives the following relocation counts:

                      relocs    branches    unique    !local
    .text               3925        3347       518       219
    .init.text            11           8         7         1
    .exit.text             4           4         4         1
    .text.unlikely        81          67        36        17

('unique' means branches to unique type/symbol/addend combos, of which !local is the subset referring to undefined symbols)

IOW, we are only emitting a single PLT entry for the .init sections, and we are better off just adding it to the core PLT section instead.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
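The +/- 128 MB figure comes from the 26-bit signed, word-scaled immediate of the AArch64 B/BL instructions. A minimal sketch of the range check that decides whether a branch relocation needs a PLT entry (the function name and example addresses are invented):

    #include <stdbool.h>
    #include <stdint.h>

    #define SZ_128M  (128LL * 1024 * 1024)

    /* An AArch64 B/BL has a 26-bit signed word offset, i.e. +/- 128 MB of
     * reach; a jump/call relocation whose target may land outside this
     * window needs a PLT entry. */
    static bool branch_in_range(uint64_t branch_addr, uint64_t target_addr)
    {
            int64_t offset = (int64_t)(target_addr - branch_addr);

            return offset >= -SZ_128M && offset < SZ_128M;
    }

    int main(void)
    {
            /* A call at 0x10000000 reaching 0x17000000 is fine; one reaching
             * 0x30000000 would have to go through a PLT entry. */
            return (branch_in_range(0x10000000, 0x17000000) &&
                    !branch_in_range(0x10000000, 0x30000000)) ? 0 : 1;
    }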
-
- 22 February 2016, 1 commit
-
-
Committed by Ard Biesheuvel

The EFI stub is typically built into the decompressor (x86, ARM), so none of its symbols are annotated as __init. However, on arm64 the stub is linked into the kernel proper, and the code is __init annotated at the section level by prepending the names of all SHF_ALLOC sections with '.init'. This results in section names like .init.rodata.str1.8 (for string literals) and .init.bss (which is tiny), both of which can be moved into the .init.data output section.

Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/1455712566-16727-6-git-send-email-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-