- 27 July 2015, 40 commits
-
Committed by Will Deacon

ARMv8 CPUs do not support any of the v8.1 features, so group them together in Kconfig to make it clear that they're part of 8.1 and not relevant to older cores.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon

Other CPU features follow an 'ARM64_HAS_*' naming scheme, so do the same for the LSE atomics.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon

We implement an optimised cmpxchg_local macro, so let the kernel know.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon

If we attempt to atomic64_dec_if_positive on INT_MIN, we will underflow and incorrectly decide that the original parameter was positive. This patch fixes the broken condition code so that we handle this corner case correctly.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
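The corner case is easiest to see in a sketch of the fixed loop (a simplified version of the macro-generated code, not the kernel's exact source):

/*
 * With b.mi (N flag only), LLONG_MIN - 1 wraps to LLONG_MAX: N is
 * clear, so the old value is wrongly treated as positive. b.lt tests
 * N != V and therefore honours signed overflow.
 */
static inline long atomic64_dec_if_positive_sketch(long *counter)
{
	long result;
	unsigned long tmp;

	asm volatile(
	"1:	ldxr	%0, %2\n"
	"	subs	%0, %0, #1\n"
	"	b.lt	2f\n"		/* was b.mi: broken for LLONG_MIN */
	"	stlxr	%w1, %0, %2\n"
	"	cbnz	%w1, 1b\n"	/* retry if we lost exclusivity */
	"	dmb	ish\n"
	"2:"
	: "=&r" (result), "=&r" (tmp), "+Q" (*counter)
	:
	: "memory", "cc");

	return result;
}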
-
Committed by Will Deacon

We don't need duplicate cmpxchg implementations, so use cmpxchg to implement atomic{,64}_cmpxchg, like we do for xchg already.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
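The resulting definitions reduce to thin wrappers over the generic cmpxchg; roughly:

/* Sketch: the atomic cmpxchg operations become wrappers around the
 * size-dispatching cmpxchg, just as xchg already is. */
#define atomic_cmpxchg(v, old, new)	cmpxchg(&((v)->counter), (old), (new))
#define atomic64_cmpxchg(v, old, new)	cmpxchg(&((v)->counter), (old), (new))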
-
Committed by Will Deacon

The cost of changing a cacheline from shared to exclusive state can be significant, especially when this is triggered by an exclusive store, since it may result in having to retry the transaction. This patch makes use of prfm to prefetch cachelines for write prior to ldxr/stxr loops when using the ll/sc atomic routines.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
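The change amounts to one extra instruction ahead of each exclusive sequence. A hedged sketch, simplified from the macro-generated atomic_add:

static inline void atomic_add_sketch(int i, int *counter)
{
	unsigned long tmp;
	int result;

	asm volatile(
	"	prfm	pstl1strm, %2\n"	/* prefetch for store, streaming */
	"1:	ldxr	%w0, %2\n"
	"	add	%w0, %w0, %w3\n"
	"	stxr	%w1, %w0, %2\n"
	"	cbnz	%w1, 1b"		/* retry if we lost exclusivity */
	: "=&r" (result), "=&r" (tmp), "+Q" (*counter)
	: "Ir" (i));
}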
-
Committed by Will Deacon

The common (i.e. identical for ll/sc and lse) atomic macros in atomic.h are needlessly different for atomic_t and atomic64_t. This patch tidies up the definitions to make them consistent across the two atomic types and factors out common code such as the add_unless implementation based on cmpxchg.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
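The factored-out add_unless follows the portable cmpxchg-loop pattern; a sketch of the common shape:

/* Retry cmpxchg until either the value equals 'u' (give up) or the
 * swap succeeds. Returns the value observed before the add. */
static inline int __atomic_add_unless_sketch(atomic_t *v, int a, int u)
{
	int c, old;

	c = atomic_read(v);
	while (c != u && (old = atomic_cmpxchg(v, c, c + a)) != c)
		c = old;

	return c;
}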
-
Committed by Will Deacon

cmpxchg doesn't require memory barrier semantics when the value comparison fails, so make the barrier conditional on success.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
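The effect is that a failed comparison branches past the barrier. A hedged sketch of one size variant (simplified from the kernel's macro-generated code):

static inline unsigned long cmpxchg_sketch(unsigned long *ptr,
					   unsigned long old,
					   unsigned long new)
{
	unsigned long res, oldval;

	asm volatile(
	"1:	ldxr	%1, %2\n"
	"	cmp	%1, %3\n"
	"	b.ne	2f\n"		/* mismatch: skip the barrier */
	"	stxr	%w0, %4, %2\n"
	"	cbnz	%w0, 1b\n"
	"	dmb	ish\n"		/* full barrier only on success */
	"2:"
	: "=&r" (res), "=&r" (oldval), "+Q" (*ptr)
	: "r" (old), "r" (new)
	: "memory", "cc");

	return oldval;
}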
-
Committed by Will Deacon

We can perform the cmpxchg comparison using eor and cbnz, which avoids the "cc" clobber for the ll/sc case and, consequently, for the LSE case, where we may have to fall back on the ll/sc code at runtime.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
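Replacing the flag-setting compare in the loop sketched above with an exclusive-or keeps the condition flags untouched; roughly:

static inline unsigned long cmpxchg_noflags_sketch(unsigned long *ptr,
						   unsigned long old,
						   unsigned long new)
{
	unsigned long tmp, oldval;

	asm volatile(
	"1:	ldxr	%1, %2\n"
	"	eor	%0, %1, %3\n"	/* zero iff the values match */
	"	cbnz	%0, 2f\n"	/* mismatch: bail out, flags untouched */
	"	stxr	%w0, %4, %2\n"
	"	cbnz	%w0, 1b\n"
	"	dmb	ish\n"
	"2:"
	: "=&r" (tmp), "=&r" (oldval), "+Q" (*ptr)
	: "r" (old), "r" (new)
	: "memory");			/* note: no "cc" clobber needed */

	return oldval;
}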
-
Committed by Will Deacon

On CPUs which support the LSE atomic instructions introduced in ARMv8.1, it makes sense to use them in preference to ll/sc sequences. This patch introduces runtime patching of our cmpxchg_double primitives so that the LSE casp instruction is used instead.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon

On CPUs which support the LSE atomic instructions introduced in ARMv8.1, it makes sense to use them in preference to ll/sc sequences. This patch introduces runtime patching of our cmpxchg primitives so that the LSE cas instruction is used instead.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon

On CPUs which support the LSE atomic instructions introduced in ARMv8.1, it makes sense to use them in preference to ll/sc sequences. This patch introduces runtime patching of our xchg primitives so that the LSE swp instruction (yes, you read right!) is used instead.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
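A sketch of the shape of the patched code. ALTERNATIVE() stands in for the kernel's runtime-patching machinery, and the nops pad the LSE side to the same length as the ll/sc sequence:

static inline unsigned long xchg_sketch(unsigned long x, unsigned long *ptr)
{
	unsigned long ret, tmp;

	asm volatile(ALTERNATIVE(
	/* default: ll/sc sequence */
	"1:	ldxr	%0, %2\n"
	"	stlxr	%w1, %3, %2\n"
	"	cbnz	%w1, 1b\n"
	"	dmb	ish",
	/* patched in when the CPU has LSE: a single swap */
	"	swpal	%3, %0, %2\n"
	"	nop\n"
	"	nop\n"
	"	nop",
	ARM64_HAS_LSE_ATOMICS)
	: "=&r" (ret), "=&r" (tmp), "+Q" (*ptr)
	: "r" (x)
	: "memory");

	return ret;
}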
-
Committed by Will Deacon

On CPUs which support the LSE atomic instructions introduced in ARMv8.1, it makes sense to use them in preference to ll/sc sequences. This patch introduces runtime patching of our bitops functions so that LSE atomic instructions are used instead.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon

On CPUs which support the LSE atomic instructions introduced in ARMv8.1, it makes sense to use them in preference to ll/sc sequences. This patch introduces runtime patching of our locking functions so that LSE atomic instructions are used for spinlocks and rwlocks.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon

On CPUs which support the LSE atomic instructions introduced in ARMv8.1, it makes sense to use them in preference to ll/sc sequences. This patch introduces runtime patching of atomic_t and atomic64_t routines so that the call-site for the out-of-line ll/sc sequences is patched with an LSE atomic instruction when we detect that the CPU supports it. If binutils is not recent enough to assemble the LSE instructions, then the ll/sc sequences are inlined as though CONFIG_ARM64_LSE_ATOMICS=n.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
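A hedged sketch of the call-site shape for one routine. The symbol name is an illustrative stand-in for the kernel's __LL_SC_ATOMIC plumbing, and the real clobber list is wider (the out-of-line routines are built with a restricted PCS, so only a few registers may be clobbered):

static inline void atomic_add_lse_sketch(int i, atomic_t *v)
{
	register int w0 asm ("w0") = i;		/* fixed regs: the out-of-line */
	register atomic_t *x1 asm ("x1") = v;	/* routine expects x0/x1 args  */

	asm volatile(ALTERNATIVE(
	"	bl	__ll_sc_atomic_add",	/* out-of-line ll/sc fallback */
	"	stadd	%w[i], %[v]",		/* LSE: single atomic add */
	ARM64_HAS_LSE_ATOMICS)
	: [i] "+r" (w0), [v] "+Q" (v->counter)
	: "r" (x1)
	: "x30", "memory");			/* bl clobbers the link register */
}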
-
Committed by Will Deacon

In order to patch in the new atomic instructions at runtime, we need to generate wrappers around the out-of-line exclusive load/store atomics. This patch adds a new Kconfig option, CONFIG_ARM64_LSE_ATOMICS, which causes our atomic functions to branch to the out-of-line ll/sc implementations. To avoid the register spill overhead of the PCS, the out-of-line functions are compiled with specific compiler flags to force out-of-line save/restore of any registers that are usually caller-saved.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon

Add a CPU feature for the LSE atomic instructions, so that they can be patched in at runtime when we detect that they are supported.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon

The ARMv8.1 architecture introduces new atomic instructions to the A64 instruction set for things like cmpxchg, so advertise their availability to userspace using a hwcap.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon

In preparation for the Large System Extension (LSE) atomic instructions introduced by ARMv8.1, move the current exclusive load/store (LL/SC) atomics into their own header file.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon

cpufeature.h makes use of DECLARE_BITMAP, which in turn relies on the BITS_TO_LONGS and DIV_ROUND_UP macros. This patch includes kernel.h in cpufeature.h to prevent all users having to do the same thing.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon

STXR can fail for a number of reasons, so don't fail an rwlock trylock operation simply because the STXR reported failure. I'm not aware of any issues with the current code, but this makes it consistent with spin_trylock and also other architectures (e.g. arch/arm).

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
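A hedged sketch of the trylock shape after the fix: a failed STXR (which can be spurious) retries the whole sequence instead of reporting the lock as taken. Zero in the lock word means "free"; the write-lock value is illustrative:

static inline int write_trylock_sketch(unsigned int *lock)
{
	unsigned int tmp;

	asm volatile(
	"1:	ldaxr	%w0, %1\n"
	"	cbnz	%w0, 2f\n"	/* lock already held: give up */
	"	stxr	%w0, %w2, %1\n"
	"	cbnz	%w0, 1b\n"	/* stxr failed: retry, don't fail */
	"2:"
	: "=&r" (tmp), "+Q" (*lock)
	: "r" (0x80000000)		/* illustrative write-lock value */
	: "memory");

	return !tmp;
}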
-
Committed by Peter Zijlstra

Implement atomic logic ops -- atomic_{or,xor,and}. These will replace the atomic_{set,clear}_mask functions that are available on some archs.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
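On arm64, each new op is a one-instruction variation on the existing ll/sc template; a sketch for atomic_or (and/eor give the other two):

static inline void atomic_or_sketch(int i, atomic_t *v)
{
	unsigned long tmp;
	int result;

	asm volatile("// atomic_or\n"
	"1:	ldxr	%w0, %2\n"
	"	orr	%w0, %w0, %w3\n"	/* and/eor for the other ops */
	"	stxr	%w1, %w0, %2\n"
	"	cbnz	%w1, 1b"
	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
	: "Ir" (i));
}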
-
Committed by Peter Zijlstra

Implement atomic logic ops -- atomic_{or,xor,and}. These will replace the atomic_{set,clear}_mask functions that are available on some archs.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Will Deacon

Our ticket-based spinlock structures rely on a definition of u16, so include linux/types.h explicitly to ensure the thing compiles. Found by a module build failure in -next:

arch/arm64/include/asm/spinlock_types.h:27:2: error: unknown type name 'u16'
arch/arm64/include/asm/spinlock_types.h:28:2: error: unknown type name 'u16'
arch/arm64/include/asm/spinlock_types.h:33:13: error: expected declaration specifiers or '...' before numeric constant
include/linux/spinlock_types.h:21:2: error: unknown type name 'arch_spinlock_t'
arch/arm64/include/asm/spinlock.h:34:35: error: unknown type name 'arch_spinlock_t'
arch/arm64/include/asm/spinlock.h:65:37: error: unknown type name 'arch_spinlock_t'

Reported-by: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Dave P Martin

The generic slowpath WARN implementation prints a backtrace, but the report_bug() based implementation does not, opting to print the registers instead, which is generally not as useful. Ideally, report_bug() should be fixed to make the behaviour more consistent, but in the meantime this patch generates a backtrace directly from the arm64 backend instead, so that this functionality is not lost with the migration to report_bug(). As a side-effect, the backtrace will be outside the oops end marker, but that's hard to avoid without modifying generic code. This patch can go away if report_bug() grows the ability in the future to generate a backtrace directly or call an arch hook at the appropriate time.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Dave P Martin

Currently, the minimal default BUG() implementation from asm-generic is used for arm64. This patch uses the BRK software breakpoint instruction to generate a trap instead, similarly to most other arches, with the generic BUG code generating the dmesg boilerplate. This allows bug metadata to be moved to a separate table and reduces the amount of inline code at BUG and WARN sites. This also avoids clobbering any registers before they can be dumped.

To mitigate the size of the bug table further, this patch makes use of the existing infrastructure for encoding addresses within the bug table as 32-bit offsets instead of absolute pointers. (Note that this limits the kernel size to 2GB.)

Traps are registered at arch_initcall time for aarch64, but BUG has minimal real dependencies and it is desirable to be able to generate bug splats as early as possible. This patch redirects all debug exceptions caused by BRK directly to bug_handler() until the full debug exception support has been initialised.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
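A condensed sketch of the mechanism (the flags/file/line fields are omitted, and the immediate value is illustrative): the bug site is recorded in __bug_table as a 32-bit offset relative to the table entry itself, and the trap is a brk with a reserved immediate.

#define BUG_BRK_IMM	0x800			/* illustrative immediate */

#define BUG_SKETCH() do {					\
	asm volatile(						\
		".pushsection __bug_table, \"a\"\n\t"		\
		".align 2\n\t"					\
	"0:	.long 1f - 0b\n\t"	/* site as 32-bit offset */ \
		".popsection\n"					\
	"1:	brk %[imm]"					\
		:: [imm] "i" (BUG_BRK_IMM));			\
	unreachable();						\
} while (0)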
-
Committed by Dave P Martin

<asm/debug-monitors.h> relies on <asm/ptrace.h>, but doesn't declare this dependency. This becomes a problem once debug-monitors.h starts getting included all over the place to get the BRK immediates. The missing include of <asm/memory.h> (for UL()) in <asm/esr.h> is also added. The series no longer relies on this, but I spotted it during development and it may as well get fixed. No functional change.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Dave P Martin

The way the KGDB_DYN_BRK_INS_BYTEx macros are declared is more complex than it needs to be. Also, the macros are only used in one place, which is arch-specific anyway. This patch refactors the macros to simplify them, and exposes an argument so that we can have a single macro instead of four. As a side effect, this patch also fixes some anomalous spellings of "KGDB". These changes alter the compile-time types of some integer constants; this is harmless but triggers truncation warnings in gcc when they are assigned to 32-bit variables, so this patch adds an explicit cast for the affected cases.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Dave P Martin

It makes sense to keep all the architectural exception syndrome definitions in the same place.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Dave P Martin

The naming of DBG_ESR_VAL_BRK is inconsistent with the way other similar macros are named. This patch makes the naming more consistent, and appends "64" as a reminder that this ESR pattern only matches from AArch64 state.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Dave P Martin

<asm/esr.h> has perfectly good constants for defining ESR values already. Let's use them.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Dave P Martin

There are only 16 comment bits in a BRK instruction, which correspond to ESR bits 15:0. Bits 24:16 of the ESR are RES0, and might have weird meanings in the future. This patch inserts 16 bits of comment into the ESR value instead of 20 (the 20 was almost certainly a typo in the original code).

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
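A sketch of the fix (0xf2000000 is the BRK64 exception class plus the IL bit; the _OLD/_NEW names are illustrative):

#define DBG_ESR_VAL_BRK_OLD(x)	(0xf2000000 | ((x) & 0xfffff))	/* 20 bits: wrong */
#define DBG_ESR_VAL_BRK_NEW(x)	(0xf2000000 | ((x) & 0xffff))	/* 16 bits: ESR[15:0] */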
-
Committed by Dave P Martin

The size of an A64 BRK instruction is the same as the size of all other A64 instructions, because all A64 instructions are the same size. BREAK_INSTR_SIZE is retained for readability, but it should not be an independent constant from AARCH64_INSN_SIZE.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
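The natural reading of the change is a single derived definition:

#define BREAK_INSTR_SIZE	AARCH64_INSN_SIZE	/* always 4 bytes on A64 */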
-
Committed by yalin wang

A small change to the patch_map() function: use set_fixmap_offset() to make the code clearer.

Signed-off-by: yalin wang <yalin.wang2010@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Ard Biesheuvel

When allocating memory for the kernel image, try the AllocatePages() boot service to obtain memory at the preferred offset of 'dram_base + TEXT_OFFSET', and only revert to efi_low_alloc() if that fails. This is the only way to allocate at the base of DRAM if DRAM starts at 0x0, since efi_low_alloc() refuses to allocate at 0x0.

Tested-by: Haojian Zhuang <haojian.zhuang@linaro.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Sudeep Holla

Since both CONFIG_ACPI and CONFIG_OF are enabled when booting using ACPI tables on ARM64 platforms, we get a few device tree warnings that are not valid for an ACPI boot. We can use of_have_populated_dt to check whether the device tree has been populated before throwing out those errors. This patch uses of_have_populated_dt to remove the illegitimate device tree warnings when booting using ACPI tables.

Cc: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon

There are cases where we want to compile out both versions of an alternative code block, so add an enable parameter to the new conditional alternative assembly macros, in the same way as alternative_insn.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by James Morse

'Privileged Access Never' is a new ARMv8.1 feature which prevents privileged code from accessing any virtual address where read or write access is also permitted at EL0. This patch enables the PAN feature on all CPUs, and modifies the {get,put}_user helpers to temporarily permit access. This will catch kernel bugs where user memory is accessed directly. 'Unprivileged loads and stores' using ldtrb et al are unaffected by PAN.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
[will: use ALTERNATIVE in asm and tidy up pan_enable check]
Signed-off-by: Will Deacon <will.deacon@arm.com>
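A hedged sketch of the uaccess bracketing (the _sketch macro names are illustrative; SET_PSTATE_PAN and the capability/config names follow this series). PAN is dropped around the access and restored afterwards, so any other kernel-originated access to user memory faults:

#define uaccess_enable_sketch()						\
	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,	\
			CONFIG_ARM64_PAN))
#define uaccess_disable_sketch()					\
	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,	\
			CONFIG_ARM64_PAN))

Note that the fourth argument relies on the config-conditional ALTERNATIVE() added later in this series: with CONFIG_ARM64_PAN disabled, neither the nop nor the PSTATE write is emitted at all.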
-
Committed by Suzuki K. Poulose

The system register encoding generated by sys_reg() works only for MRS/MSR(Register) operations, as we hardcode bit 20 to 1 in the mrs_s/msr_s mask. This makes it unusable for generating instructions that access registers with Op0 < 2 (e.g., PSTATE.x with Op0 = 0). As per the ARMv8 ARM (Ref: ARMv8 ARM, Section: "System instruction class encoding overview", C5.2, version: ARM DDI 0487A.f), the instruction encoding reserves bits [20:19] for Op0. This patch generalises the sys_reg, mrs_s and msr_s macros so that we can use them to access any of the supported system registers.

Cc: James Morse <james.morse@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
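A sketch of the generalised encoding: Op0 now occupies instruction bits [20:19] rather than bit 20 being hardwired to 1.

#define sys_reg(op0, op1, crn, crm, op2)			\
	((((op0) & 3) << 19) | ((op1) << 16) | ((crn) << 12) |	\
	 ((crm) << 8) | ((op2) << 5))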
-
Committed by James Morse

Some uses of ALTERNATIVE() may depend on a feature that is disabled at compile time by a Kconfig option. In this case the unused alternative instructions waste space, and if the original instruction is a nop, it wastes time and space. This patch adds an optional 'config' option to ALTERNATIVE() and alternative_insn that allows the compiler to remove both the original and alternative instructions if the config option is not defined.

Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-