- 03 Nov 2017, 4 commits
-
-
Submitted by Dave Martin
Currently, a guest kernel sees the true CPU feature registers (ID_*_EL1) when it reads them using MRS instructions. This means that the guest may observe features that are present in the hardware but the host doesn't understand or doesn't provide support for. A guest may legitimately try to use such a feature as per the architecture, but use of the feature may trap instead of working normally, triggering undef injection into the guest. This is not a problem for the host, but the guest may go wrong when running on newer hardware than the host knows about. This patch hides from guest VMs any AArch64-specific CPU features that the host doesn't support, by exposing to the guest the sanitised versions of the registers computed by the cpufeatures framework, instead of the true hardware registers. To achieve this, HCR_EL2.TID3 is now set for AArch64 guests, and emulation code is added to KVM to report the sanitised versions of the affected registers in response to MRS and register reads from userspace. The affected registers are removed from invariant_sys_regs[] (since the invariant_sys_regs handling is no longer quite correct for them) and added to sys_reg_descs[], with appropriate access(), get_user() and set_user() methods. No runtime vcpu storage is allocated for the registers: instead, they are read on demand from the cpufeatures framework. This may need modification in the future if there is a need for userspace to customise the features visible to the guest. Attempts by userspace to write the registers are handled similarly to the current invariant_sys_regs handling: writes are permitted, but only if they don't attempt to change the value. This is sufficient to support VM snapshot/restore from userspace. Because of the additional registers, restoring a VM on an older kernel may not work unless userspace knows how to handle the extra VM registers exposed to the KVM user ABI by this patch. Under the principle of least damage, this patch makes no attempt to handle any of the other registers currently in invariant_sys_regs[], or to emulate registers for AArch32: however, these could be handled in a similar way in future, as necessary. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
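A minimal sketch of the idea (the handler shape and function names are assumed for illustration; the real code plugs into the KVM sys_reg_descs[] access() callbacks): with HCR_EL2.TID3 set, guest MRS reads of ID_*_EL1 trap to the host, which answers with the sanitised value from the cpufeatures framework.

```c
#include <asm/cpufeature.h>	/* read_sanitised_ftr_reg() */
#include <asm/sysreg.h>		/* SYS_ID_AA64PFR0_EL1, ... */

/* No per-vcpu storage: fetch the sanitised view on demand. */
static u64 emulate_id_reg_read(u32 encoding)
{
	return read_sanitised_ftr_reg(encoding);
}

static void example_trap_handler(void)
{
	/* A trapped "mrs x0, id_aa64pfr0_el1" would be answered with: */
	u64 val = emulate_id_reg_read(SYS_ID_AA64PFR0_EL1);
	(void)val;	/* written back to the guest's x0 by the trap code */
}
```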
-
Submitted by Dave Martin
Currently sys_rt_sigreturn() verifies that the base sigframe is readable, but no similar check is performed on the extra data to which an extra_context record points. This matters because the extra data will be read with the unprotected user accessors. However, this is not a problem at present because the extra data base address is required to be exactly at the end of the base sigframe. So, there would need to be a non-user-readable kernel address within about 59K (SIGFRAME_MAXSZ - sizeof(struct rt_sigframe)) of some address for which access_ok(VERIFY_READ) returns true, in order for sigreturn to be able to read kernel memory that should be inaccessible to the user task. This is currently impossible due to the untranslatable address hole between the TTBR0 and TTBR1 address ranges. Disappearance of the hole between the TTBR0 and TTBR1 mapping ranges would require the VA size for TTBR0 and TTBR1 to grow to at least 55 bits, and either the disabling of tagged pointers for userspace or enabling of tagged pointers for kernel space; none of which is currently envisaged. Even so, it is wrong to use the unprotected user accessors without an accompanying access_ok() check. To avoid the potential for future surprises, this patch does an explicit access_ok() check on the extra data space when parsing an extra_context record. Fixes: 33f08261 ("arm64: signal: Allow expansion of the signal frame") Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
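The shape of the added check, as a sketch (the helper name is assumed; the real check lives in the extra_context parsing path of arch/arm64/kernel/signal.c):

```c
#include <linux/uaccess.h>

/*
 * Validate the extra data explicitly before it is read with the
 * unprotected user accessors. v4.14-era access_ok() still takes a
 * VERIFY_READ/VERIFY_WRITE type argument.
 */
static int check_extra_context(void __user *extra, size_t extra_size)
{
	if (!access_ok(VERIFY_READ, extra, extra_size))
		return -EFAULT;
	return 0;
}
```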
-
Submitted by Dave Martin
A couple of FPSIMD exception handling functions that are called from entry.S are currently not annotated as such. This is not a big deal since asmlinkage does nothing on arm/arm64, but fixing the annotations is more consistent and may help avoid future surprises. This patch adds appropriate asmlinkage annotations for do_fpsimd_acc() and do_fpsimd_exc(). Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
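Illustrative declarations of the annotated form (argument lists follow the common arm64 exception-handler pattern; check entry.S for the authoritative signatures):

```c
#include <linux/linkage.h>

struct pt_regs;

/*
 * asmlinkage marks functions called directly from assembly. On
 * arm/arm64 it expands to nothing, so this is purely a consistency
 * annotation.
 */
asmlinkage void do_fpsimd_acc(unsigned int esr, struct pt_regs *regs);
asmlinkage void do_fpsimd_exc(unsigned int esr, struct pt_regs *regs);
```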
-
Submitted by Julien Thierry
Function graph does not work currently when CONFIG_DYNAMIC_FTRACE is not set. This is because ftrace_trace_function is not always set to ftrace_stub when function_graph is in use. Do not skip checking of graph tracer functions when ftrace_trace_function is set. Signed-off-by: Julien Thierry <julien.thierry@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Reviewed-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 02 Nov 2017, 11 commits
-
-
Submitted by Xie XiuQi
Today SError is taken using the inv_entry macro that ends up in bad_mode. SError can be used by the RAS Extensions to notify either the OS or firmware of CPU problems, some of which may have been corrected. To allow this handling to be added, add a do_serror() C function that just panic()s. Add the entry.S boilerplate to save/restore the CPU registers and unmask debug exceptions. Future patches may change do_serror() to return if the SError Interrupt was notification of a corrected error. Signed-off-by: Xie XiuQi <xiexiuqi@huawei.com> Signed-off-by: Wang Xiongfeng <wangxiongfengi2@huawei.com> [Split out of a bigger patch, added compat path, renamed, enabled debug exceptions] Signed-off-by: James Morse <james.morse@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
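An abridged sketch of the new handler (the upstream version also logs the ESR class and register state before dying): for now any SError reaching the kernel is fatal, and later RAS patches can make it return when the SError merely reported a corrected error.

```c
#include <linux/kernel.h>
#include <linux/linkage.h>
#include <asm/ptrace.h>

asmlinkage void do_serror(struct pt_regs *regs, unsigned int esr)
{
	/* Entry.S has saved the registers and unmasked debug exceptions. */
	panic("Asynchronous SError Interrupt, ESR 0x%08x", esr);
}
```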
-
Submitted by James Morse
Following our 'dai' order, irqs should be processed with debug and serror exceptions unmasked. Add a helper to unmask these two (and FIQ for good measure). Signed-off-by: James Morse <james.morse@arm.com> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by James Morse
el0_sync also unmasks exceptions on a case-by-case basis: debug exceptions are enabled unless this was a debug exception, and irqs are unmasked for some exception types but not for others. el0_dbg should run with everything masked to prevent us taking a debug exception from do_debug_exception. For the other cases we can unmask everything. This changes the behaviour of fpsimd_{acc,exc} and el0_inv, which previously ran with irqs masked. This patch removes the last user of enable_dbg_and_irq, so remove that macro too. Signed-off-by: James Morse <james.morse@arm.com> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by James Morse
el1_sync unmasks exceptions on a case-by-case basis: debug exceptions are unmasked unless this was a debug exception, and IRQs are unmasked for instruction and data aborts only if the interrupted context had irqs unmasked. Following our 'dai' order, el1_dbg should run with everything masked. For the other cases we can inherit whatever we interrupted. Add a macro inherit_daif to set daif based on the interrupted pstate. Signed-off-by: James Morse <james.morse@arm.com> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by James Morse
enable_step_tsk is the only user of disable_dbg, which doesn't respect our 'dai' order for exception masking. enable_step_tsk may enable single-step, so it previously needed to mask debug exceptions to prevent us from single-stepping kernel_exit. enable_step_tsk is called at the end of the ret_to_user loop, which has already masked all exceptions, so this is no longer needed. Remove disable_dbg, and add a comment that enable_step_tsk's caller should have masked debug. Signed-off-by: James Morse <james.morse@arm.com> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by James Morse
To take RAS Exceptions as quickly as possible we need to keep SError unmasked as much as possible. We need to mask it during kernel_exit, as taking an error from this code will overwrite the exception registers. Adding a naked 'disable_daif' to kernel_exit causes a performance problem for micro-benchmarks that do no real work (e.g. calling getpid() in a loop). This is because the ret_to_user loop has already masked IRQs so that the TIF_WORK_MASK thread flags can't change underneath it; adding disable_daif is an additional self-synchronising operation. In the future, the RAS APEI code may need to modify the TIF_WORK_MASK flags from an SError, in which case the ret_to_user loop must mask SError while it examines the flags. Disable all exceptions for return to EL1. For return to EL0, get the ret_to_user loop to leave all exceptions masked once it has done its work; this avoids an extra pstate-write. Signed-off-by: James Morse <james.morse@arm.com> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by James Morse
Remove the local_{async,fiq}_{en,dis}able macros as they don't respect our newly defined order and are only used to set the flags for process context when we bring CPUs online. Add a helper to do this. The IRQ flag varies as we want it masked on the boot CPU until we are ready to handle interrupts. The boot CPU unmasks SError during early boot once it can print an error message. If we can print an error message about SError, we can do the same for FIQ. Debug exceptions are already enabled by __cpu_setup(), which has also configured MDSCR_EL1 to disable MDE and KDE. Signed-off-by: James Morse <james.morse@arm.com> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by James Morse
Currently SError is always masked in the kernel. To support RAS exceptions using SError on hardware with the v8.2 RAS Extensions we need to unmask SError as much as possible. Let's define an order for masking and unmasking exceptions. 'dai' is memorable and effectively what we have today. Disabling debug exceptions should cause all other exceptions to be masked. Masking SError should mask irq, but not disable debug exceptions. Masking irqs has no side effects for other flags. Keeping to this order makes it easier for entry.S to know which exceptions should be unmasked. FIQ is never expected, but we mask it when we mask debug exceptions, and unmask it at all other times. Given masking debug exceptions masks everything, we don't need macros to save/restore that bit independently. Remove them and switch the last caller over to use the daif calls. Signed-off-by: James Morse <james.morse@arm.com> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by James Morse
There are a few places where we want to mask all exceptions. Today we do this in a piecemeal fashion: typically we expect the caller to have masked irqs and the arch code masks debug exceptions, ignoring SError, which is probably masked. Make it clear that 'mask all exceptions' is the intention by adding helpers to do exactly that. This will let us unmask SError without having to add 'oh and SError' to these paths. Signed-off-by: James Morse <james.morse@arm.com> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
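A minimal sketch of such helpers (names are illustrative, not necessarily the exact ones added upstream): all four DAIF bits are set or cleared in one self-synchronising system-register write.

```c
/* Mask Debug, SError (A), IRQ and FIQ in a single operation. */
static inline void local_mask_all_exceptions(void)
{
	asm volatile("msr daifset, #0xf" : : : "memory");
}

static inline void local_unmask_all_exceptions(void)
{
	asm volatile("msr daifclr, #0xf" : : : "memory");
}
```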
-
Submitted by Yisheng Xie
After commit 9e8e865b ("arm64: unify idmap removal"), we no longer need to flush the TLB in suspend.c, so the tlbflush.h include can be removed. Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Will Deacon
Commit 42dbf54e ("arm64: consistently log ESR and page table") dumps page table entries for user faults hitting do_bad entries in the fault handler table. Whilst this shouldn't really happen in practice, it's not beyond the realms of possibility if e.g. running an old kernel on a new CPU. Generally, we want to avoid exposing physical addresses under the control of userspace (see commit bf396c09 ("arm64: mm: don't print out page table entries on EL0 faults")), so walk the page tables only on exceptions from EL1. Reported-by: Kristina Martsenko <kristina.martsenko@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
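The shape of the guard, as a sketch (the surrounding fault handler is abridged; show_pte() taking just the address matches the arm64 fault code of this era):

```c
#include <asm/ptrace.h>

/* Only walk and print the page tables for exceptions taken from EL1,
 * so userspace cannot use do_bad faults to leak physical addresses. */
static void dump_fault_info(unsigned long addr, struct pt_regs *regs)
{
	if (!user_mode(regs))
		show_pte(addr);
}
```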
-
- 31 Oct 2017, 1 commit
-
-
Submitted by Mark Rutland
The vdso tries to check for a NULL res pointer in __kernel_clock_getres, but only checks the lower 32 bits, as it uses CBZ on the W register the res pointer is held in. Thus, if the res pointer happened to be aligned to a 4GiB boundary, we'd spuriously skip storing the timespec to it, while returning a zero error code to the caller. Prevent this by checking the whole pointer, using CBZ on the X register the res pointer is held in. Fixes: 9031fefd ("arm64: VDSO support") Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reported-by: Andrew Pinski <apinski@cavium.com> Reported-by: Mark Salyzyn <salyzyn@android.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
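A C rendering of the bug (the real fix is in the vdso assembly, swapping a W-register CBZ for an X-register CBZ): testing only the low 32 bits of a pointer treats any 4GiB-aligned non-NULL pointer as NULL.

```c
#include <stdint.h>

static int res_is_null_buggy(void *res)
{
	/* Equivalent of "cbz w0, ...": truncates the pointer. */
	return (uint32_t)(uintptr_t)res == 0;
}

static int res_is_null_fixed(void *res)
{
	/* Equivalent of "cbz x0, ...": full 64-bit NULL check. */
	return (uintptr_t)res == 0;
}
```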
-
- 30 Oct 2017, 2 commits
-
-
Submitted by Nick Desaulniers
Upon upgrading to binutils 2.27, we found that our lz4 and gzip compressed kernel images were significantly larger, resulting in 10ms boot-time regressions. As noted by Rahul: "aarch64 binaries use RELA relocations, where each relocation entry includes an addend value. This is similar to x86_64. On x86_64, the addend values are also stored at the relocation offset for relative relocations. This is an optimization: in the case where code does not need to be relocated, the loader can simply skip processing relative relocations. In binutils-2.25, both bfd and gold linkers did this for x86_64, but only the gold linker did this for aarch64. The kernel build here is using the bfd linker, which stored zeroes at the relocation offsets for relative relocations. Since a set of zeroes compresses better than a set of non-zero addend values, this behavior was resulting in much better lz4 compression. The bfd linker in binutils-2.27 is now storing the actual addend values at the relocation offsets. The behavior is now consistent with what it does for x86_64 and what gold linker does for both architectures. The change happened in this upstream commit: https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=1f56df9d0d5ad89806c24e71f296576d82344613 Since a bunch of zeroes got replaced by non-zero addend values, we see the side effect of the lz4 compressed image being a bit bigger. To get the old behavior from the bfd linker, the "--no-apply-dynamic-relocs" flag can be used: $ LDFLAGS="--no-apply-dynamic-relocs" make With this flag, the compressed image size is back to what it was with binutils-2.25. If the kernel is using ASLR, there aren't additional runtime costs to --no-apply-dynamic-relocs, as the relocations will need to be applied again anyway after the kernel is relocated to a random address. If the kernel is not using ASLR, then presumably the current default behavior of the linker is better. Since the static linker performed the dynamic relocs, and the kernel is not moved to a different address at load time, it can skip applying the relocations all over again." Some measurements: $ ld -v GNU ld (binutils-2.25-f3d35cf6) 2.25.51.20141117 ^ $ ls -l vmlinux -rwxr-x--- 1 ndesaulniers eng 300652760 Oct 26 11:57 vmlinux $ ls -l Image.lz4-dtb -rw-r----- 1 ndesaulniers eng 16932627 Oct 26 11:57 Image.lz4-dtb $ ld -v GNU ld (binutils-2.27-53dd00a1) 2.27.0.20170315 ^ pre patch: $ ls -l vmlinux -rwxr-x--- 1 ndesaulniers eng 300376208 Oct 26 11:43 vmlinux $ ls -l Image.lz4-dtb -rw-r----- 1 ndesaulniers eng 18159474 Oct 26 11:43 Image.lz4-dtb post patch: $ ls -l vmlinux -rwxr-x--- 1 ndesaulniers eng 300376208 Oct 26 12:06 vmlinux $ ls -l Image.lz4-dtb -rw-r----- 1 ndesaulniers eng 16932466 Oct 26 12:06 Image.lz4-dtb By Siqi's measurement w/ gzip: binutils 2.27 with this patch (with --no-apply-dynamic-relocs): Image 41535488 Image.gz 13404067 binutils 2.27 without this patch (without --no-apply-dynamic-relocs): Image 41535488 Image.gz 14125516 Any compression scheme should be able to get better results from the longer runs of zeros, not just GZIP and LZ4. 10ms of boot-time savings isn't anything to get excited about, but users of arm64+compression+bfd-2.27 should not have to pay a penalty for no runtime improvement.
Reported-by: Gopinath Elanchezhian <gelanchezhian@google.com> Reported-by: Sindhuri Pentyala <spentyala@google.com> Reported-by: Wei Wang <wvw@google.com> Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Suggested-by: Rahul Chaudhry <rahulchaudhry@google.com> Suggested-by: Siqi Lin <siqilin@google.com> Suggested-by: Stephen Hines <srhines@google.com> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [will: added comment to Makefile] Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Catalin Marinas
The generic pte_access_permitted() implementation only checks for pte_present() (together with the write permission where applicable). However, for both kernel ptes and PROT_NONE mappings pte_present() also returns true on arm64 even though such mappings are not user accessible. Additionally, arm64 now supports execute-only user permission (PROT_EXEC only, without PROT_READ), which is implemented by clearing the PTE_USER bit. With this patch the arm64 implementation of pte_access_permitted() checks for the PTE_VALID and PTE_USER bits together with writable access if applicable. Cc: <stable@vger.kernel.org> Reported-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
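A sketch of the stricter check (paraphrased from the arm64 pgtable.h approach; exact upstream formatting may differ): both PTE_VALID and PTE_USER must be set, plus write permission when requested.

```c
#define pte_access_permitted(pte, write)			\
	(((pte_val(pte) & (PTE_VALID | PTE_USER)) ==		\
	  (PTE_VALID | PTE_USER)) &&				\
	 (!(write) || pte_write(pte)))
```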
-
- 27 Oct 2017, 4 commits
-
-
Submitted by Will Deacon
PSTATE.Q only exists for AArch32, which can be referred to using COMPAT_PSR_Q_BIT. Remove PSR_Q_BIT, since the native bit doesn't exist in the architecture. Tested-by: Laura Abbott <labbott@redhat.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Will Deacon
We can decode the PSTATE easily enough, so pretty-print it in register dumps. Tested-by: Laura Abbott <labbott@redhat.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Will Deacon
Printing raw pointer values in backtraces has potential security implications and is of questionable value anyway. This patch follows x86's lead and removes the "Exception stack:" dump from kernel backtraces, as well as converting PC/LR values to symbols such as "sysrq_handle_crash+0x20/0x30". Tested-by: Laura Abbott <labbott@redhat.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
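An illustrative helper (not the exact dump_backtrace() code): the %pS printk format resolves an address to symbol+offset, e.g. "sysrq_handle_crash+0x20/0x30", so raw pointer values stay out of the log.

```c
#include <linux/printk.h>
#include <asm/ptrace.h>

static void print_pc_symbolically(struct pt_regs *regs)
{
	pr_info(" pc: %pS\n", (void *)regs->pc);
}
```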
-
Submitted by Mark Rutland
When we take a fault we can't handle, we try to dump some relevant information, but we're not consistent about doing so. In do_mem_abort(), we log the full ESR, but don't dump a page table walk. In __do_kernel_fault, we dump an attempted decoding of the ESR (but not the ESR itself) along with a page table walk. Let's try to make things more consistent by dumping the full ESR in mem_abort_decode(), and having do_mem_abort dump a page table walk. The existing dump of the ESR in do_mem_abort() is rendered redundant, and removed. Tested-by: Laura Abbott <labbott@redhat.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Julien Thierry <julien.thierry@arm.com> Cc: Kristina Martsenko <kristina.martsenko@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 25 Oct 2017, 3 commits
-
-
Submitted by Dave Martin
Currently ASM_BUG() and its constituent macros define local assembler labels 0, 1 and 2 internally, which carries a high risk of clash with callers' labels and consequent mis-assembly. This patch gives the labels a big random offset to minimise the chance of such errors. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Julien Thierry
A Software Step exception is missing after stepping a trapped instruction. Ensure SPSR.SS gets set to 0 after emulating/skipping a trapped instruction before doing ERET. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Julien Thierry <julien.thierry@arm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> [will: replaced AARCH32_INSN_SIZE with 4] Signed-off-by: Will Deacon <will.deacon@arm.com>
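A sketch of the idea (the function name is assumed; upstream folds this into a common skip-instruction helper): after emulating a trapped instruction, advance the PC and clear SPSR.SS so the pending step exception is delivered after ERET.

```c
#include <asm/debug-monitors.h>	/* DBG_SPSR_SS */
#include <asm/ptrace.h>

static void skip_trapped_insn(struct pt_regs *regs)
{
	regs->pc += 4;			/* trapped instructions here are 4 bytes */
	regs->pstate &= ~DBG_SPSR_SS;	/* SS == 0: step exception now pending */
}
```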
-
Submitted by Julien Thierry
Literal values are being used to set single stepping in mdscr from assembly code. Defines representing those values already exist, so use them instead of the literals. Signed-off-by: Julien Thierry <julien.thierry@arm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 24 Oct 2017, 1 commit
-
-
Submitted by Mark Salyzyn
__memcpy_{to,from}io fall back to byte-at-a-time copying if both the source and destination pointers are not 8-byte aligned. Since one of the pointers always points at normal memory, this is unnecessary and detrimental to performance, so only do byte copying until we hit an 8-byte boundary for the device pointer. This change was motivated by performance issues in the pstore driver. On a test platform, measuring probe time for pstore, with a console buffer size of 1/4MB and pmsg of 1/2MB, was in the 90-107ms region. The change reduced it to 10-25ms, a clear boot-time improvement. Cc: Kees Cook <keescook@chromium.org> Cc: Anton Vorontsov <anton@enomsg.org> Cc: Tony Luck <tony.luck@intel.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Mark Salyzyn <salyzyn@android.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
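A sketch of the new strategy for the from-device direction (close in shape to the arm64 lib/io.c version, but abridged): only the device-side pointer must be 8-byte aligned for readq; the normal-memory destination tolerates unaligned stores.

```c
#include <linux/io.h>
#include <linux/kernel.h>

void __memcpy_fromio(void *to, const volatile void __iomem *from,
		     size_t count)
{
	/* Byte-copy only until the *device* pointer is 8-byte aligned. */
	while (count && !IS_ALIGNED((unsigned long)from, 8)) {
		*(u8 *)to = __raw_readb(from);
		from++;
		to++;
		count--;
	}

	/* Bulk of the copy proceeds in 64-bit chunks. */
	while (count >= 8) {
		*(u64 *)to = __raw_readq(from);
		from += 8;
		to += 8;
		count -= 8;
	}

	/* Trailing bytes. */
	while (count) {
		*(u8 *)to = __raw_readb(from);
		from++;
		to++;
		count--;
	}
}
```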
-
- 20 Oct 2017, 1 commit
-
-
Submitted by Suzuki K Poulose
Now that the ARM ARM clearly specifies the rules for inferring the values of the ID register fields, fix the types of the feature bits we have in the kernel. ARM ARM DDI0487B.b, section D10.1.4 "Principles of the ID scheme for fields in ID registers" lists the registers to which the scheme applies along with the exceptions. This patch changes the relevant feature bits from FTR_EXACT to FTR_LOWER_SAFE to select the safer value. This will enable an older kernel running on a new CPU to detect the safer option rather than completely disabling the feature. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Dave Martin <dave.martin@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 19 Oct 2017, 1 commit
-
-
Submitted by Julien Thierry
Based on: ARM Architecture Reference Manual, ARMv8 (DDI 0487B.b). ARMv8.1 introduces the optional feature ARMv8.1-TTHM, which can trigger a new type of memory abort. This exception is triggered when hardware update of page table flags is not atomic with regard to other memory accesses. Replace the corresponding unknown entry with a more accurate one. Cf: Section D10.2.28 ESR_ELx, Exception Syndrome Register (p D10-2381), section D4.4.11 Restriction on memory types for hardware updates on page tables (p D4-2116 - D4-2117). ARMv8.2 does not add new exception types; however, it is worth mentioning that when the RAS feature (mandatory in ARMv8.2, optional for ARMv8.{0,1}) is implemented, exceptions related to "Synchronous parity or ECC error on memory access, not on translation table walk" become reserved and should not occur. Signed-off-by: Julien Thierry <julien.thierry@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 18 Oct 2017, 2 commits
-
-
Submitted by Will Deacon
When booting at EL2, ensure that we permit the EL1 host to sample physical addresses and physical counter values using SPE. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Will Deacon
SPE is part of the v8.2 architecture, so move its system register and field definitions into sysreg.h and the new PSB barrier into barrier.h. Finally, move KVM over to using the generic definitions so that it doesn't have to open-code its own versions. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 14 Oct 2017, 2 commits
-
-
Submitted by Julien Thierry
The current delay implementation uses the yield instruction, which is a hint that it is beneficial to schedule another thread. As this is a hint, it may be implemented as a NOP, causing all delays to be busy loops. This is the case for many existing CPUs. Taking advantage of the generic timer sending periodic events to all cores, we can use WFE during delays to reduce power consumption. This is beneficial only for delays longer than the period of the timer event stream. If the timer event stream is not enabled, delays will behave as yield/busy loops. Signed-off-by: Julien Thierry <julien.thierry@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
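A sketch of the WFE-based delay loop (paraphrased from the arch/arm64/lib/delay.c approach; the availability check and the USECS_TO_CYCLES helper are assumed): sleep in WFE while more than one event-stream period remains, then busy-poll the remainder.

```c
#include <clocksource/arm_arch_timer.h>	/* ARCH_TIMER_EVT_STREAM_PERIOD_US */
#include <linux/timex.h>		/* get_cycles() */
#include <asm/barrier.h>		/* wfe() */

void __delay(unsigned long cycles)
{
	cycles_t start = get_cycles();

	if (arch_timer_evtstrm_available()) {
		const cycles_t timer_evt_period =
			USECS_TO_CYCLES(ARCH_TIMER_EVT_STREAM_PERIOD_US);

		/* Sleep while at least one event period still remains. */
		while ((get_cycles() - start + timer_evt_period) < cycles)
			wfe();
	}

	/* Spin for the tail end of the delay. */
	while ((get_cycles() - start) < cycles)
		cpu_relax();
}
```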
-
Submitted by Julien Thierry
The arch timer configuration for a CPU might get reset after suspending said CPU. In order to reliably use the event stream in the kernel (e.g. for delays), we keep track of the state where we can safely consider the event stream as properly configured. After writing to cntkctl, we issue an ISB to ensure that subsequent delay loops can rely on the event stream being enabled. Signed-off-by: Julien Thierry <julien.thierry@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 11 Oct 2017, 1 commit
-
-
Submitted by Suzuki K Poulose
ARMv8-A adds a few optional features for ARMv8.2 and ARMv8.3. Expose them to userspace via HWCAPs and mrs emulation. SHA2-512 - Instruction support for the SHA512 hash algorithm (e.g. SHA512H, SHA512H2, SHA512SU0, SHA512SU1). SHA3 - SHA3 crypto instructions (EOR3, RAX1, XAR, BCAX). SM3 - Instruction support for the Chinese cryptography algorithm SM3. SM4 - Instruction support for the Chinese cryptography algorithm SM4. DP - Dot Product instructions (UDOT, SDOT). Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Dave Martin <dave.martin@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
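A userspace-side sketch of consuming the new capabilities via the ELF auxiliary vector (HWCAP_SHA512 and HWCAP_ASIMDDP are the names this patch introduces in the uapi hwcap header; verify against your kernel headers):

```c
#include <stdio.h>
#include <sys/auxv.h>
#include <asm/hwcap.h>

int main(void)
{
	unsigned long hwcaps = getauxval(AT_HWCAP);

	if (hwcaps & HWCAP_SHA512)
		printf("SHA512 instructions available\n");
	if (hwcaps & HWCAP_ASIMDDP)
		printf("Dot Product instructions available\n");
	return 0;
}
```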
-
- 09 Oct 2017, 1 commit
-
-
Submitted by Ben Hutchings
Process personality always propagates across a fork(), but can change at an execve(). Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 04 Oct 2017, 3 commits
-
-
Submitted by Matthieu CASTET
For example, on an arm64 board, this adds caller info to the "user" entries in vmallocinfo. Before: [...] 0xffffff8008997000 0xffffff80089d8000 266240 user [...] After: [...] 0xffffff8008997000 0xffffff80089d8000 266240 atomic_pool_init+0x0/0x1d8 user [...] This helps debug mapping issues, and is consistent with other entries (ioremap, vmalloc, ...) that already provide a caller. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Matthieu CASTET <matthieu.castet@parrot.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Stephen Boyd
From what I can see there isn't anything about ACPI_APEI_SEA that means the arm64 architecture can or cannot support NMI safe cmpxchg or NMIs, so the 'if' condition here is not important. Let's remove it. Doing that allows us to support ftrace histograms via CONFIG_HIST_TRIGGERS, which depends on the arch having the ARCH_HAVE_NMI_SAFE_CMPXCHG config selected. Cc: Tyler Baicar <tbaicar@codeaurora.org> Cc: Jonathan (Zhixiong) Zhang <zjzhang@codeaurora.org> Cc: Dongjiu Geng <gengdongjiu@huawei.com> Acked-by: James Morse <james.morse@arm.com> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Mark Rutland
Currently we inconsistently log identifying information for the boot CPU and secondary CPUs. For the boot CPU, we log the MIDR and MPIDR across separate messages, whereas for the secondary CPUs we only log the MIDR. In some cases, it would be useful to know the MPIDR of secondary CPUs, and it would be nice for these messages to be consistent. This patch ensures that in the primary and secondary boot paths, we log both the MPIDR and MIDR in a single message, with a consistent format. The MPIDR is consistently padded to 10 hex characters to cover Aff3 in bits 39:32, so that IDs can be compared easily. The newly redundant message in setup_arch() is removed. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Al Stone <ahs3@redhat.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> [will: added '0x' prefixes consistently] Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 02 Oct 2017, 3 commits
-
-
Submitted by Kees Cook
As discussed at the Linux Security Summit, arm64 prefers to use REFCOUNT_FULL by default. This enables it for the architecture. Cc: hw.likun@huawei.com Cc: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Thomas Meyer
Use the vma_pages() function on the vma object instead of computing the value explicitly. Found by the coccinelle spatch "api/vma_pages.cocci". Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Thomas Meyer <thomas@m3y3r.de> Signed-off-by: Will Deacon <will.deacon@arm.com>
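The before/after shape of the transformation, as a sketch (vma is any struct vm_area_struct *):

```c
#include <linux/mm.h>

static unsigned long count_before(struct vm_area_struct *vma)
{
	return (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
}

static unsigned long count_after(struct vm_area_struct *vma)
{
	return vma_pages(vma);	/* identical result, clearer intent */
}
```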
-
Submitted by Masahiro Yamada
As you see in init/version.c, init_uts_ns.name.machine is initially set to UTS_MACHINE. There is no point in copying the same string. I dug through the git history to figure out why this line is here. My best guess is like this: - This line has been around here since the initial support of arm64 by commit 9703d9d7 ("arm64: Kernel booting and initialisation"). If ARCH (=arm64) and UTS_MACHINE (=aarch64) do not match, arch/$(ARCH)/Makefile is supposed to override UTS_MACHINE, but the initial version of arch/arm64/Makefile failed to do so. Instead, the boot code copied "aarch64" to init_utsname()->machine. - Commit 94ed1f2c ("arm64: setup: report ELF_PLATFORM as the machine for utsname") replaced "aarch64" with ELF_PLATFORM to make "uname" reflect the endianness. - ELF_PLATFORM does not help to provide the UTS machine name to the rpm target, so commit cfa88c79 ("arm64: Set UTS_MACHINE in the Makefile") fixed it. The commit simply replaced ELF_PLATFORM with UTS_MACHINE, but missed the fact that the string copy itself is no longer needed. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-