- 01 July 2016, 4 commits
-
-
Submitted by Andre Przywara

The newly introduced dcache_by_line_op macro is currently used in at least one place to issue a "dc cvau" instruction, which is affected by ARM errata 819472, 826319, 827319 and 824069. Change the macro to allow for alternative patching in there to protect affected Cortex-A53 cores.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
[catalin.marinas@arm.com: indentation fixups]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Andre Przywara

The ARM errata 819472, 826319, 827319 and 824069 for affected Cortex-A53 cores demand that "dc cvau" instructions be promoted to "dc civac". Attribute the usage of the instruction in __flush_cache_user_range so that it is also covered by our alternative patching efforts. For that we introduce an assembly macro which both deals with alternatives and still tags the instructions as USER.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Andre Przywara

Commit 77ee306c ("arm64: alternatives: add enable parameter to conditional asm macros") extended the alternative assembly macros. Unfortunately this does not really work as one would expect: the enable parameter correctly protects the alternative section magic, but not the actual code sequences. This results in having both the original instruction(s) _and_ the alternative ones if enable is false. Since there is no user of these macros anyway, just revert it. This reverts commit 77ee306c.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Kefeng Wang

memblock_alloc() and memblock_alloc_base() already panic on their own if there is no free memory, so remove the pointless BUG_ON.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 28 June 2016, 2 commits
-
-
Submitted by Ard Biesheuvel

As Kees Cook notes in the ARM counterpart of this patch [0]: The _etext position is defined to be the end of the kernel text code, and should not include any part of the data segments. This interferes with things that might check memory ranges and expect executable code up to _etext. In particular, Kees is referring to the HARDENED_USERCOPY patch set [1], which rejects attempts to call copy_to_user() on kernel ranges containing executable code, but does allow access to the .rodata segment. Regardless of whether one may or may not agree with the distinction, it makes sense for _etext to have the same meaning across architectures. So let's put _etext where it belongs, between .text and .rodata, and fix up existing references to use __init_begin instead, which unlike __end_rodata includes the exception and notes sections as well. The _etext references in kaslr.c are left untouched, since its references to [_stext, _etext) are meant to capture potential jump instruction targets, and so disregarding .rodata is actually an improvement here.

[0] http://article.gmane.org/gmane.linux.kernel/2245084
[1] http://thread.gmane.org/gmane.linux.kernel.hardened.devel/2502

Reported-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Mark Rutland

We currently open-code extracting the NUMA node of a memblock region, which requires an ifdef to cater for !CONFIG_NUMA builds where the memblock_region::nid field does not exist. The generic memblock_get_region_node helper is intended to cater for this: for CONFIG_HAVE_MEMBLOCK_NODE_MAP builds it returns reg->nid, and for !CONFIG_HAVE_MEMBLOCK_NODE_MAP builds it is a static inline that returns 0. Note that for arm64, CONFIG_HAVE_MEMBLOCK_NODE_MAP is selected iff CONFIG_NUMA is. This patch makes use of memblock_get_region_node to simplify the arm64 code. At the same time, we can move the nid variable definition into the loop, as this is the only place it is used.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
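For illustration, the resulting pattern looks roughly like the following sketch (the wrapper function name is made up and the per-region body is elided; memblock_get_region_node() returns 0 when !CONFIG_HAVE_MEMBLOCK_NODE_MAP):

    /* Sketch only: the generic helper hides the CONFIG_HAVE_MEMBLOCK_NODE_MAP ifdef */
    static void __init sketch_walk_memblock_nodes(void)
    {
            struct memblock_region *reg;

            for_each_memblock(memory, reg) {
                    int nid = memblock_get_region_node(reg);

                    /* ... per-region setup using nid ... */
            }
    }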
-
- 27 June 2016, 5 commits
-
-
Submitted by Geoff Levand

To aid in debugging kexec problems or when adding new functionality to kexec, add a new routine kexec_image_info() and several inline pr_debug statements.

Signed-off-by: Geoff Levand <geoff@infradead.org>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Geoff Levand

Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Geoff Levand

Add three new files, kexec.h, machine_kexec.c and relocate_kernel.S, to the arm64 architecture that add support for the kexec re-boot mechanism (CONFIG_KEXEC) on arm64 platforms.

Signed-off-by: Geoff Levand <geoff@infradead.org>
Reviewed-by: James Morse <james.morse@arm.com>
[catalin.marinas@arm.com: removed dead code following James Morse's comments]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Geoff Levand

Commit 68234df4 ("arm64: kill flush_cache_all()") removed the global arm64 routines cpu_reset() and cpu_soft_restart() needed by the arm64 kexec and kdump support. Add back a simplified version of cpu_soft_restart() with some changes needed for kexec in the new files cpu_reset.S and cpu_reset.h. When a CPU is reset it needs to be put into the exception level it had when it entered the kernel. Update cpu_soft_restart() to accept an argument which signals whether the reset address should be entered at EL1 or EL2, and add a new hypercall HVC_SOFT_RESTART which is used for the EL2 switch.

Signed-off-by: Geoff Levand <geoff@infradead.org>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by James Morse

kernel/smp.c has a fancy counter that keeps track of the number of CPUs it marked as not-present and left in cpu_park_loop(). If there are any CPUs spinning in here, features like kexec or hibernate may release them by overwriting this memory. This problem also occurs on machines using spin-tables to release secondary cores. After commit 44dbcc93 ("arm64: Fix behavior of maxcpus=N"), we bring all known cpus into the secondary holding pen, meaning this memory can't be re-used by kexec or hibernate. Add a function cpus_are_stuck_in_kernel() to determine if either of these cases has occurred.

Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
[catalin.marinas@arm.com: cherry-picked from mainline for kexec dependency]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 22 June 2016, 4 commits
-
-
Submitted by Catalin Marinas

This reverts commit 6b14c517. With the original patch applied and UBSAN+KASAN enabled, Linux fails to link with:

    lib/built-in.o: In function `get_signed_val':
    lib/ubsan.c:93: undefined reference to `__ashlti3'
    lib/ubsan.c:93: undefined reference to `__ashrti3'

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Mark Rutland

Currently we treat ESR_EL1 bit 24 as software-defined for distinguishing instruction aborts from data aborts, but this bit is architecturally RES0 for instruction aborts, and could be allocated for an arbitrary purpose in future. Additionally, we hard-code the value in entry.S without the mnemonic, making the code difficult to understand. Instead, remove ESR_LNX_EXEC, and distinguish aborts based on the esr, which we already pass to the sole user of ESR_LNX_EXEC. A new helper, is_el0_instruction_abort(), is added to make the logic clear. Any instruction aborts taken from EL1 will already have been handled by bad_mode, so we need not handle that case in the helper. For consistency, the existing permission_fault helper is renamed to is_permission_fault, and its return type is changed to bool. There should be no functional changes, as the return value was a boolean expression and the result is only used in another boolean expression.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Dave P Martin <dave.martin@arm.com>
Cc: Huang Shijie <shijie.huang@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
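As an illustration, the new helper boils down to something like this sketch (assuming the ESR_ELx_EC() accessor and the ESR_ELx_EC_IABT_LOW class value from esr.h; not necessarily the exact upstream hunk):

    /* Sketch: classify a fault as an EL0 instruction abort purely from its ESR */
    static inline bool is_el0_instruction_abort(unsigned int esr)
    {
            return ESR_ELx_EC(esr) == ESR_ELx_EC_IABT_LOW;
    }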
-
Submitted by Mark Rutland

Now that we have a helper to extract the EC from an ESR_ELx value, make use of it in the arm64 KVM code for simplicity and consistency. There should be no functional changes as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Dave P Martin <dave.martin@arm.com>
Cc: Huang Shijie <shijie.huang@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Mark Rutland

Several places open-code extraction of the EC field from an ESR_ELx value, in subtly different ways. This is unfortunate duplication and variation, and the precise logic used to extract the field is a distraction. This patch adds a new macro, ESR_ELx_EC(), to extract the EC field from an ESR_ELx value in a consistent fashion. Existing open-coded extractions in core arm64 code are moved over to the new helper. KVM code is left as-is for the moment.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Huang Shijie <shijie.huang@arm.com>
Cc: Dave P Martin <dave.martin@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
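The accessor amounts to a mask-and-shift of the EC bits; a sketch using the existing ESR_ELx_EC_MASK and ESR_ELx_EC_SHIFT definitions (not necessarily the literal upstream macro):

    /* Sketch: extract the exception class (EC) field from an ESR_ELx value */
    #define ESR_ELx_EC(esr)    (((esr) & ESR_ELx_EC_MASK) >> ESR_ELx_EC_SHIFT)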
-
- 21 June 2016, 7 commits
-
-
Submitted by Jisheng Zhang

Only initialize swiotlb when swiotlb_force is true or not all system memory is DMA-able; this trivial optimization saves us 64MB when swiotlb is not necessary.

Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
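A sketch of the idea as it would appear in arm64's mem_init() (the exact condition and variable names are approximated here, not copied from the patch):

    /* Sketch: skip the 64MB swiotlb bounce buffer when it cannot be needed */
    if (swiotlb_force || max_pfn > (arm64_dma_phys_limit >> PAGE_SHIFT))
            swiotlb_init(1);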
-
Submitted by Alexander Potapenko

Add ARCH_HAS_KCOV to the ARM64 config. To avoid potential crashes, disable instrumentation of the files in arch/arm64/kvm/hyp/*.

Signed-off-by: Alexander Potapenko <glider@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Kefeng Wang

gcc supports __SIZEOF_INT128__ and __int128 on arm64, so enable ARCH_SUPPORTS_INT128 to make mul_u64_u32_shr() a bit more efficient in the scheduler.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
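With ARCH_SUPPORTS_INT128 enabled, the generic helper can take its __int128 fast path instead of splitting the multiply into 32-bit pieces; roughly (a sketch of the generic include/linux/math64.h variant, reproduced from memory):

    /* Sketch: the __int128 path does the whole 64x32-bit multiply and shift in one go */
    static inline u64 mul_u64_u32_shr(u64 a, u32 mul, unsigned int shift)
    {
            return (u64)(((unsigned __int128)a * mul) >> shift);
    }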
-
Submitted by Mark Rutland

Currently dump_mem attempts to dump memory in 64-bit chunks when reporting a failure in 64-bit code, or 32-bit chunks when reporting a failure in 32-bit code. We added code to handle these two cases separately in commit e147ae6d ("arm64: modify the dump mem for 64 bit addresses"). However, in all cases dump_mem is called, the failing context is a kernel rather than user context. Additionally, dump_mem is assumed to only be used for kernel contexts, as internally it switches to KERNEL_DS and its callers pass kernel stack bounds. This patch removes the redundant 32-bit chunk logic and the associated compat parameter, largely reverting the aforementioned commit. For the call in __die(), the check of in_interrupt() is removed also, as __die() is only called in response to faults from the kernel's exception level, and thus the !user_mode(regs) check is sufficient. Were this not the case, the use of task_stack_page(tsk) to generate the stack bounds would be erroneous.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Yang Shi

The upstream commit 1771c6e1 ("x86/kasan: instrument user memory access API") added KASAN instrumentation to the x86 user memory access API, so add the same instrumentation to ARM64 too. Define __copy_to/from_user in C in order to add the kasan_check_read/write calls, and rename the assembly implementations to __arch_copy_to/from_user. Tested with the test_kasan module.

Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Yang Shi <yang.shi@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
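The shape of the resulting C wrappers is roughly the following (a sketch, not the exact uaccess.h hunk):

    /* Sketch: KASAN checks the kernel-side buffer before handing off to the asm copy */
    static inline unsigned long __must_check
    __copy_from_user(void *to, const void __user *from, unsigned long n)
    {
            kasan_check_write(to, n);
            return __arch_copy_from_user(to, from, n);
    }

    static inline unsigned long __must_check
    __copy_to_user(void __user *to, const void *from, unsigned long n)
    {
            kasan_check_read(from, n);
            return __arch_copy_to_user(to, from, n);
    }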
-
Submitted by Mark Rutland

For debugging purposes, it would be nice if we could export page tables other than swapper_pg_dir to userspace. To enable this, this patch refactors the arm64 page table dumping code such that multiple tables may be registered with the framework and exported under debugfs.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Robin Murphy

AArch64 is capable of 128-bit memory accesses without alignment restrictions, which makes it both possible and highly practical to slurp up a typical 20-byte IP header in just 2 loads. Implement our own version of ip_fast_checksum() to take advantage of that, resulting in considerably fewer instructions and memory accesses than the generic version. We can also get more optimal code generation for csum_fold() by defining it a slightly different way round from the generic version, so throw that into the mix too.

Suggested-by: Luke Starrett <luke.starrett@broadcom.com>
Acked-by: Luke Starrett <luke.starrett@broadcom.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
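The csum_fold() trick is to add the rotated sum to itself so the final fold and inversion compile down to cheaper arm64 code; a sketch of the idea (not necessarily the literal arm64 checksum.h definition):

    /* Sketch: fold a 32-bit partial checksum down to 16 bits, then invert */
    static inline __sum16 csum_fold(__wsum csum)
    {
            u32 sum = (__force u32)csum;

            sum += (sum >> 16) | (sum << 16);   /* fold via rotate-and-add */
            return ~(__force __sum16)(sum >> 16);
    }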
-
- 17 June 2016, 1 commit
-
-
Submitted by Daniel Thompson

Current versions of gdb do not interoperate cleanly with kgdb on arm64 systems because gdb and kgdb do not use the same register description. This patch modifies kgdb to work with recent releases of gdb (>= 7.8.1). Compatibility with gdb (after the patch is applied) is as follows:

    gdb-7.6 and earlier   Ok
    gdb-7.7 series        Works if user provides custom target description
    gdb-7.8(.0)           Works if user provides custom target description
    gdb-7.8.1 and later   Ok

When commit 44679a4f ("arm64: KGDB: Add step debugging support") was introduced, it was paired with a gdb patch that made an incompatible change to the gdbserver protocol. This patch was eventually merged into the gdb sources:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=commit;h=a4d9ba85ec5597a6a556afe26b712e878374b9dd

The change to the protocol was mostly made to simplify big-endian support inside the kernel gdb stub. Unfortunately the gdb project released gdb-7.7.x and gdb-7.8.0 before the protocol incompatibility was identified and reversed:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=commit;h=bdc144174bcb11e808b4e73089b850cf9620a7ee

This leaves us in a position where kgdb still uses the no-longer-used protocol; gdb-7.8.1, which restored the original behaviour, was released on 2014-10-29. I don't believe it is possible to detect/correct the protocol incompatibility, which means the kernel must take a view about which version of the gdb remote protocol is "correct". This patch takes the view that the original/current version of the protocol is correct and that the version found in gdb-7.7.x and gdb-7.8.0 is anomalous.

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 15 June 2016, 3 commits
-
-
Submitted by Will Deacon

Rather than wait until we observe the lock being free (which might never happen), we can also return from spin_unlock_wait if we observe that the lock is now held by somebody else, which implies that it was unlocked but we just missed seeing it in that state. Furthermore, in such a scenario there is no longer a need to write back the value that we loaded, since we know that there has been a lock hand-off, which is sufficient to publish any stores prior to the unlock_wait because the ARM architecture ensures that a Store-Release instruction is multi-copy atomic when observed by a Load-Acquire instruction. The litmus test is something like:

    AArch64
    {
    0:X1=x; 0:X3=y;
    1:X1=y;
    2:X1=y; 2:X3=x;
    }
     P0          | P1           | P2           ;
     MOV W0,#1   | MOV W0,#1    | LDAR W0,[X1] ;
     STR W0,[X1] | STLR W0,[X1] | LDR W2,[X3]  ;
     DMB SY      |              |              ;
     LDR W2,[X3] |              |              ;
    exists
    (0:X2=0 /\ 2:X0=1 /\ 2:X2=0)

where P0 is doing spin_unlock_wait, P1 is doing spin_unlock and P2 is doing spin_lock.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Will Deacon

Commit d86b8da0 ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers") fixed spin_unlock_wait for LL/SC-based atomics under the premise that the LSE atomics (in particular, the LDADDA instruction) are indivisible. Unfortunately, these instructions are only indivisible when used with the -AL (full ordering) suffix and, consequently, the same issue can theoretically be observed with LSE atomics, where a later (in program order) load can be speculated before the write portion of the atomic operation. This patch fixes the issue by performing a CAS of the lock once we've established that it's unlocked, in much the same way as the LL/SC code.

Fixes: d86b8da0 ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers")
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Will Deacon

spin_is_locked has grown two very different use-cases:

(1) [The sane case] API functions may require a certain lock to be held by the caller and can therefore use spin_is_locked as part of an assert statement in order to verify that the lock is indeed held. For example, usage of assert_spin_locked.

(2) [The insane case] There are two locks, where a CPU takes one of the locks and then checks whether or not the other one is held before accessing some shared state. For example, the "optimized locking" in ipc/sem.c.

In the latter case, the sequence looks like:

    spin_lock(&sem->lock);
    if (!spin_is_locked(&sma->sem_perm.lock))
            /* Access shared state */

and requires that the spin_is_locked check is ordered after taking the sem->lock. Unfortunately, since our spinlocks are implemented using a LDAXR/STXR sequence, the read of &sma->sem_perm.lock can be speculated before the STXR and consequently return a stale value. Whilst this hasn't been seen to cause issues in practice, PowerPC fixed the same issue in 51d7d520 ("powerpc: Add smp_mb() to arch_spin_is_locked()") and, although we did something similar for spin_unlock_wait in d86b8da0 ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers"), that doesn't actually take care of ordering against local acquisition of a different lock. This patch adds an smp_mb() to the start of our arch_spin_is_locked and arch_spin_unlock_wait routines to ensure that the lock value is always loaded after any other locks have been taken by the current CPU.

Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
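The arch_spin_is_locked side of this amounts to something like the following sketch (assuming the existing arch_spin_value_unlocked helper; not necessarily the exact upstream hunk):

    /* Sketch: order the lock load after any lock already taken by this CPU */
    static inline int arch_spin_is_locked(arch_spinlock_t *lock)
    {
            smp_mb();   /* see the ipc/sem.c sequence above */
            return !arch_spin_value_unlocked(READ_ONCE(*lock));
    }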
-
- 14 June 2016, 2 commits
-
-
Submitted by Mark Rutland

Unlike the debug_fault_info table, we never intentionally alter the fault_info table at runtime, and all derived pointers are currently treated as const. Make the table const so that it can be placed in .rodata and protected from unintentional writes, as we do for the syscall tables.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Mark Rutland

If the kernel is set to show unhandled signals, and a user task does not handle a SIGILL as a result of an instruction abort, we will attempt to log the offending instruction with dump_instr before killing the task. We use dump_instr to log the encoding of the offending userspace instruction. However, dump_instr is also used to dump instructions from kernel space, and internally always switches to KERNEL_DS before dumping the instruction with get_user. When both PAN and UAO are in use, reading a user instruction via get_user while in KERNEL_DS will result in a permission fault, which leads to an Oops. As we have regs corresponding to the context of the original instruction abort, we can inspect this and only flip to KERNEL_DS if the original abort was taken from the kernel, avoiding this issue. At the same time, remove the redundant (and incorrect) comments regarding the order dump_mem and dump_instr are called in.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: <stable@vger.kernel.org> # 4.6+
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Fixes: 57f4959b ("arm64: kernel: Add support for User Access Override")
Signed-off-by: Will Deacon <will.deacon@arm.com>
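The fix boils down to gating the KERNEL_DS switch on the faulting context; a sketch of the resulting wrapper (the __dump_instr helper name is used here for illustration and may not match the patch exactly):

    /* Sketch: only switch to KERNEL_DS when the abort was taken from the kernel */
    static void dump_instr(const char *lvl, struct pt_regs *regs)
    {
            if (!user_mode(regs)) {
                    mm_segment_t fs = get_fs();

                    set_fs(KERNEL_DS);
                    __dump_instr(lvl, regs);
                    set_fs(fs);
            } else {
                    __dump_instr(lvl, regs);
            }
    }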
-
- 08 June 2016, 1 commit
-
-
Submitted by Will Deacon

Commit 66dbd6e6 ("arm64: Implement ptep_set_access_flags() for hardware AF/DBM") ensured that pte flags are updated atomically in the face of potential concurrent, hardware-assisted updates. However, Alex reports that:

    | This patch breaks swapping for me.
    | In the broken case, you'll see either systemd cpu time spike (because
    | it's stuck in a page fault loop) or the system hang (because the
    | application owning the screen is stuck in a page fault loop).

It turns out that this is because the 'dirty' argument to ptep_set_access_flags is always 0 for read faults, and so we can't use it to set PTE_RDONLY. The failing sequence is:

    1. We put down a PTE_WRITE | PTE_DIRTY | PTE_AF pte
    2. Memory pressure -> pte_mkold(pte) -> clear PTE_AF
    3. A read faults due to the missing access flag
    4. ptep_set_access_flags is called with dirty = 0, due to the read fault
    5. pte is then made PTE_WRITE | PTE_DIRTY | PTE_AF | PTE_RDONLY (!)
    6. A write faults, but pte_write is true so we get stuck

The solution is to check the new page table entry (as would be done by the generic, non-atomic definition of ptep_set_access_flags that just calls set_pte_at) to establish the dirty state.

Cc: <stable@vger.kernel.org> # 4.3+
Fixes: 66dbd6e6 ("arm64: Implement ptep_set_access_flags() for hardware AF/DBM")
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Alexander Graf <agraf@suse.de>
Tested-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
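For comparison, the generic non-atomic definition referenced above looks roughly like this (reproduced from memory, so treat it as a sketch): it installs the new entry as-is, so the entry's own dirty/read-only state naturally wins, which is the behaviour the arm64 atomic version is made to match.

    /* Sketch of the generic fallback: the new entry is written verbatim */
    int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address,
                              pte_t *ptep, pte_t entry, int dirty)
    {
            int changed = !pte_same(*ptep, entry);

            if (changed) {
                    set_pte_at(vma->vm_mm, address, ptep, entry);
                    flush_tlb_fix_spurious_fault(vma, address);
            }
            return changed;
    }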
-
- 04 June 2016, 1 commit
-
-
Submitted by Masahiro Yamada

Tree-wide replacement was done by commit 2ef7d5f3 (ARM, ARM64: dts: drop "arm,amba-bus" in favor of "simple-bus"), but we have some new users of "arm,amba-bus" as of Linux 4.7-rc1. Eliminate them now.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Heiko Stuebner <heiko@sntech.de>
Acked-by: Chanho Min <chanho.min@lge.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
-
- 03 June 2016, 6 commits
-
-
Submitted by Mark Rutland

With ARM64_64K_PAGES and RANDOMIZE_TEXT_OFFSET enabled, we hit the following issue on boot:

    kernel BUG at arch/arm64/mm/mmu.c:480!
    Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
    Modules linked in:
    CPU: 0 PID: 0 Comm: swapper Not tainted 4.6.0 #310
    Hardware name: ARM Juno development board (r2) (DT)
    task: ffff000008d58a80 ti: ffff000008d30000 task.ti: ffff000008d30000
    PC is at map_kernel_segment+0x44/0xb0
    LR is at paging_init+0x84/0x5b0
    pc : [<ffff000008c450b4>] lr : [<ffff000008c451a4>] pstate: 600002c5
    Call trace:
    [<ffff000008c450b4>] map_kernel_segment+0x44/0xb0
    [<ffff000008c451a4>] paging_init+0x84/0x5b0
    [<ffff000008c42728>] setup_arch+0x198/0x534
    [<ffff000008c40848>] start_kernel+0x70/0x388
    [<ffff000008c401bc>] __primary_switched+0x30/0x74

Commit 7eb90f2f ("arm64: cover the .head.text section in the .text segment mapping") removed the alignment between the .head.text and .text sections, and used the _text rather than the _stext interval for mapping the .text segment. Prior to this commit _stext was always section aligned and didn't cause any issue, even when RANDOMIZE_TEXT_OFFSET was enabled. Since that alignment has been removed and _text is used to map the .text segment, we need to ensure _text is always page aligned when RANDOMIZE_TEXT_OFFSET is enabled. This patch adds logic to TEXT_OFFSET fuzzing to ensure that the offset is always aligned to the kernel page size. To ensure this, we rely on the PAGE_SHIFT being available via Kconfig.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Sudeep Holla <sudeep.holla@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Fixes: 7eb90f2f ("arm64: cover the .head.text section in the .text segment mapping")
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Mark Rutland

In some cases (e.g. the awk for CONFIG_RANDOMIZE_TEXT_OFFSET) we would like to make use of PAGE_SHIFT outside of code that can include the usual header files. Add a new CONFIG_ARM64_PAGE_SHIFT for this, likewise with ARM64_CONT_SHIFT for consistency.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Mark Rutland

The page table dump code logs spans of entries at the same level (pgd/pud/pmd/pte) which have the same attributes. While we log the (decoded) attributes, we don't log the level, which leaves the output ambiguous and/or confusing in some cases. For example:

    0xffff800800000000-0xffff800980000000 6G RW NX SHD AF BLK UXN MEM/NORMAL

If using 4K pages, this may describe a span of 6 1G block entries at the PGD/PUD level, or 3072 2M block entries at the PMD level. This patch adds the page table level to each output line, removing this ambiguity. For the example above, this will produce:

    0xffffffc800000000-0xffffffc980000000 6G PUD RW NX SHD AF BLK UXN MEM/NORMAL

When 3-level tables are in use, and we use the asm-generic/nopud.h definitions, the dump code treats each entry in the PGD as a 1-element table at the PUD level, and logs spans as being PUDs, which can be confusing. To counteract this, the "PUD" mnemonic is replaced with "PGD" when CONFIG_PGTABLE_LEVELS <= 3. Likewise for "PMD" when CONFIG_PGTABLE_LEVELS <= 2.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Huang Shijie <shijie.huang@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Steve Capper <steve.capper@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Mark Rutland

Commit ab893fb9 ("arm64: introduce KIMAGE_VADDR as the virtual base of the kernel region") logically split KIMAGE_VADDR from PAGE_OFFSET, and since commit f9040773 ("arm64: move kernel image to base of vmalloc area") the two have been distinct values. Unfortunately, neither commit updated the comment above these definitions, which now erroneously states that PAGE_OFFSET is the start of the kernel image rather than the start of the linear mapping. This patch fixes said comment, and introduces an explanation of KIMAGE_VADDR.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Mark Rutland

If we take an exception we don't expect (e.g. SError), we report this in the bad_mode handler with pr_crit. Depending on the configured log level, we may or may not log additional information in functions called subsequently. Notably, the messages in dump_stack (including the CPU number) are printed with KERN_DEFAULT and may not appear. Some exceptions have an IMPLEMENTATION DEFINED ESR_ELx.ISS encoding, and knowing the CPU number is crucial to correctly decode them. To ensure that this is always possible, we should log the CPU number along with the ESR_ELx value, so we are not reliant on subsequent logs or additional printk configuration options. This patch logs the CPU number in bad_mode such that it is possible for a developer to decode these exceptions, provided access to sufficient documentation.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Al Grant <Al.Grant@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
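The change amounts to including smp_processor_id() in the existing pr_crit; roughly (the message wording here is approximate, not the exact upstream string):

    /* Sketch: make the CPU number part of the always-printed crit message */
    pr_crit("Bad mode in %s handler detected on CPU%d, code 0x%08x -- %s\n",
            handler[reason], smp_processor_id(), esr,
            esr_get_class_string(esr));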
-
Submitted by Ganapatrao Kulkarni

This erratum workaround fixes the hang of the ITS SYNC command by avoiding inter-node I/O and collections/CPU mappings on the ThunderX dual-socket platform. This fix is only applicable to Cavium's ThunderX dual-socket platform.

Reviewed-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com>
Signed-off-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 02 June 2016, 1 commit
-
-
Submitted by Will Deacon

We're missing entries for mlock2, copy_file_range, preadv2 and pwritev2 in our compat syscall table, so hook them up. Only the last two need compat wrappers.

Signed-off-by: Will Deacon <will.deacon@arm.com>
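In the compat (AArch32) syscall table these hook-ups take the usual __SYSCALL form; a sketch, with the syscall numbers shown only as assumptions based on the 32-bit ARM layout rather than taken from the patch itself:

    /* Sketch: numbers are illustrative, following the 32-bit ARM table */
    #define __NR_mlock2 390
    __SYSCALL(__NR_mlock2, sys_mlock2)
    #define __NR_copy_file_range 391
    __SYSCALL(__NR_copy_file_range, sys_copy_file_range)
    #define __NR_preadv2 392
    __SYSCALL(__NR_preadv2, compat_sys_preadv2)
    #define __NR_pwritev2 393
    __SYSCALL(__NR_pwritev2, compat_sys_pwritev2)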
-
- 01 June 2016, 1 commit
-
-
Submitted by Catalin Marinas

This patch brings the PER_LINUX32 /proc/cpuinfo format more in line with the 32-bit ARM one by providing an additional line:

    model name : ARMv8 Processor rev X (v8l)

Cc: <stable@vger.kernel.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 31 May 2016, 2 commits
-
-
Submitted by Marc Zyngier

The GICv3 backend of the vgic is quite barrier-heavy, in order to ensure synchronization of the system registers and the memory-mapped view for a potential GICv2 guest. But when the guest is using a GICv3 model, there is absolutely no need to execute all these heavy barriers, and it is actually beneficial to avoid them altogether. This patch makes the synchronization conditional, and ensures that we do not change the EL1 SRE settings if we do not need to.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Submitted by Marc Zyngier

Both our GIC emulations are "strict", in the sense that we either emulate a GICv2 or a GICv3, and not a GICv3 with GICv2 legacy support. But when running on a GICv3 host, we still allow the guest to tinker with the ICC_SRE_EL1 register during its time slice: it can switch SRE off, observe that it is off, and yet on the next world switch find the SRE bit to be set again. Not very nice. An obvious solution is to always trap accesses to ICC_SRE_EL1 (by clearing ICC_SRE_EL2.Enable), and to let the handler return the programmed value on a read, or ignore the write. That way, the guest can always observe that our GICv3 is SRE==1 only.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-