- 24 Feb 2017, 1 commit
-
-
Committed by Mark Rutland

When we're updating a register's sys_val, we use arm64_ftr_value() to find the new field value. This relies on cpuid_feature_extract_field(), which implicitly assumes a 4-bit field, so we may extract more bits than we mean to for fields like CTR_EL0.L1ip.

This affects update_cpu_ftr_reg(), where we may extract erroneous values for ftr_cur and ftr_new. Depending on the additional bits extracted in either case, we may erroneously detect that the value is mismatched, and we'll try to compute a new safe value. Depending on these extra bits and the feature type, arm64_ftr_safe_value() may pessimistically select the always-safe value, or may erroneously choose either the extracted cur or new value as the safe option. The extra bits will subsequently be masked out in arm64_ftr_set_value(), so we may choose a higher value, yet write back a lower one.

Fix this by passing the width down explicitly in arm64_ftr_value(), so we always extract the correct amount.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
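For illustration, a minimal sketch of width-aware field extraction as the commit describes it; the helper name and signature are illustrative, not the kernel's exact ones:

```c
#include <linux/bits.h>

/* Extract a feature field of arbitrary width, sign-extending if needed. */
static inline s64 ftr_extract_field_width(u64 reg, int shift, int width,
					  bool sign)
{
	u64 mask = GENMASK_ULL(width - 1, 0);
	u64 field = (reg >> shift) & mask;

	/* Sign-extend when the field is signed and its top bit is set. */
	if (sign && (field & BIT_ULL(width - 1)))
		field |= ~mask;

	return (s64)field;
}
```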
-
- 15 Feb 2017, 1 commit
-
-
Committed by Mark Rutland

In A64, XZR and the SP share the same encoding (31), and whether an instruction accesses XZR or SP for a particular register parameter depends on the definition of the instruction.

We store the SP in pt_regs::regs[31], and thus when emulating instructions, we must be careful not to erroneously read from or write back to the saved SP. Unfortunately, we often fail to be this careful. In all cases, instructions using a transfer register parameter Xt use this to refer to XZR rather than SP.

This patch adds helpers so that we can more easily and consistently handle these cases.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
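The helpers are along these lines (a sketch matching the commit's description: Xt encoded as 31 means XZR, so reads return zero and writes are discarded):

```c
/* Read a GPR from saved state, treating r31 as XZR (reads as zero). */
static inline unsigned long pt_regs_read_reg(const struct pt_regs *regs, int r)
{
	return (r == 31) ? 0 : regs->regs[r];
}

/* Write a GPR to saved state, treating r31 as XZR (writes ignored). */
static inline void pt_regs_write_reg(struct pt_regs *regs, int r,
				     unsigned long val)
{
	if (r != 31)
		regs->regs[r] = val;
}
```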
-
- 10 Feb 2017, 1 commit
-
-
Committed by Christopher Covington

The Qualcomm Datacenter Technologies Falkor v1 CPU may allocate TLB entries using an incorrect ASID when TTBRx_EL1 is being updated. When the erratum is triggered, page table entries using the new translation table base address (BADDR) will be allocated into the TLB using the old ASID.

All circumstances leading to the incorrect ASID being cached in the TLB arise when software writes TTBRx_EL1[ASID] and TTBRx_EL1[BADDR], a memory operation is in the process of performing a translation using the specific TTBRx_EL1 being written, and the memory operation uses a translation table descriptor designated as non-global. EL2 and EL3 code changing the EL1&0 ASID is not subject to this erratum because hardware is prohibited from performing translations from an out-of-context translation regime.

Consider the following pseudo code:

    write new BADDR and ASID values to TTBRx_EL1

Replacing the above sequence with the one below will ensure that no TLB entries with an incorrect ASID are used by software:

    write reserved value to TTBRx_EL1[ASID]
    ISB
    write new value to TTBRx_EL1[BADDR]
    ISB
    write new value to TTBRx_EL1[ASID]
    ISB

When the above sequence is used, page table entries using the new BADDR value may still be incorrectly allocated into the TLB using the reserved ASID. Yet this will not reduce functionality, since TLB entries incorrectly tagged with the reserved ASID will never be hit by a later instruction.

Based on work by Shanker Donthineni <shankerd@codeaurora.org>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
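A sketch of the quoted sequence in C, assuming the ASID field occupies TTBR0_EL1[63:48] and that ASID 0 is reserved by software; this is illustrative only, not the kernel's actual workaround code:

```c
static inline void falkor_safe_ttbr0_update(u64 new_baddr, u64 new_asid)
{
	u64 old_baddr = read_sysreg(ttbr0_el1) & ~GENMASK_ULL(63, 48);

	write_sysreg(old_baddr, ttbr0_el1);	/* reserved ASID (0), old BADDR */
	isb();
	write_sysreg(new_baddr, ttbr0_el1);	/* reserved ASID (0), new BADDR */
	isb();
	write_sysreg(new_baddr | (new_asid << 48), ttbr0_el1);	/* new ASID */
	isb();
}
```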
-
- 09 Feb 2017, 1 commit
-
-
Committed by Mark Rutland

Currently in arm64's copy_{to,from}_user, we only check the source/destination object size if access_ok() tells us the user access is permissible. However, in copy_from_user() we'll subsequently zero any remainder on the destination object. If we failed the access_ok() check, that applies to the whole object size, which we didn't check.

To ensure that we catch that case, this patch hoists check_object_size() to the start of copy_from_user(), matching __copy_from_user() and __copy_to_user(). To make all of our uaccess copy primitives consistent, the same is done to copy_to_user().

Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
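Roughly, the resulting shape (a sketch using 2017-era signatures; details such as KASAN annotations are omitted):

```c
static inline unsigned long copy_from_user(void *to, const void __user *from,
					   unsigned long n)
{
	unsigned long res = n;

	check_object_size(to, n, false);	/* hoisted above access_ok() */

	if (access_ok(VERIFY_READ, from, n))
		res = __arch_copy_from_user(to, from, n);
	if (unlikely(res))
		memset(to + (n - res), 0, res);	/* zero the uncopied remainder */
	return res;
}
```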
-
- 07 Feb 2017, 1 commit
-
-
Committed by Pratyush Anand

Atomic operation function symbols are exported when CONFIG_ARM64_LSE_ATOMICS is defined. Prefix them with notrace so that a user cannot trace these functions: tracing them causes a kernel crash.

Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 03 Feb 2017, 1 commit
-
-
Committed by Will Deacon

The SPE buffer is virtually addressed, using the page tables of the CPU MMU. Unusually, this means that the EL0/1 page table may be live whilst we're executing at EL2 on non-VHE configurations. When VHE is in use, we can use the same property to profile the guest behind its back.

This patch adds the relevant disabling and flushing code to KVM so that the host can make use of SPE without corrupting guest memory, and any attempts by a guest to use SPE will result in a trap.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Alex Bennée <alex.bennee@linaro.org>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 01 Feb 2017, 2 commits
-
-
Committed by Christopher Covington

During a TLB invalidate sequence targeting the inner shareable domain, Falkor may prematurely complete the DSB before all loads and stores using the old translation are observed. Instruction fetches are not subject to the conditions of this erratum. If the original code sequence includes multiple TLB invalidate instructions followed by a single DSB, only one of the TLB instructions needs to be repeated to work around this erratum.

While the erratum only applies to cases in which the TLBI specifies the inner-shareable domain (*IS form of TLBI) and the DSB is ISH form or stronger (OSH, SYS), this change applies the workaround overabundantly (to local TLBI and DSB NSH sequences as well) for simplicity.

Based on work by Shanker Donthineni <shankerd@codeaurora.org>

Signed-off-by: Christopher Covington <cov@codeaurora.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
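As a sketch, a full TLB flush with the extra invalidation applied; this is illustrative only (in the kernel the repeated TLBI is patched in via the alternatives framework):

```c
static inline void flush_tlb_all_with_workaround(void)
{
	asm volatile(
	"	dsb	ishst\n"
	"	tlbi	vmalle1is\n"
	"	dsb	ish\n"
	"	tlbi	vmalle1is\n"	/* repeat one TLBI after the DSB */
	"	dsb	ish\n"
	"	isb"
	: : : "memory");
}
```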
-
Committed by Catalin Marinas

Commit cab15ce6 ("arm64: Introduce execute-only page access permissions") allowed a valid user PTE to have the PTE_USER bit clear. As a consequence, the pte_valid_not_user() macro in set_pte() was replaced with pte_valid_global() under the assumption that only user pages have the nG bit set. EFI mappings, however, also have the nG bit set, and set_pte() wrongly skips issuing the DSB+ISB.

This patch reinstates the pte_valid_not_user() macro and adds the PTE_UXN bit check, since all kernel mappings have this bit set. For clarity, pte_exec() is renamed to pte_user_exec() as it only checks for the absence of PTE_UXN. Consequently, the user executable check in set_pte_at() drops the pte_ng() test since pte_user_exec() is sufficient.

Fixes: cab15ce6 ("arm64: Introduce execute-only page access permissions")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
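The reinstated macro is along these lines: a mapping is "valid, not user" when it is valid, has PTE_USER clear, and has PTE_UXN set (the nG check alone is insufficient since EFI runtime mappings are also nG):

```c
#define pte_valid_not_user(pte) \
	((pte_val(pte) & (PTE_VALID | PTE_USER | PTE_UXN)) == \
	 (PTE_VALID | PTE_UXN))
```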
-
- 27 Jan 2017, 1 commit
-
-
Committed by Shanker Donthineni

Define the MIDR implementer and part number field values for the Qualcomm Datacenter Technologies Falkor processor version 1 in the usual manner.

Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
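The definitions follow the usual MIDR pattern, roughly as below (values as merged upstream, to the best of recollection; MIDR_CPU_MODEL composes the implementer and part number):

```c
#define ARM_CPU_IMP_QCOM		0x51	/* 'Q' */
#define QCOM_CPU_PART_FALKOR_V1		0x800

#define MIDR_QCOM_FALKOR_V1 \
	MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_FALKOR_V1)
```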
-
- 13 Jan 2017, 1 commit
-
-
Committed by Robert Richter

Definitions of CPU ranges are hard to read if the CPU variant is not zero. Provide a MIDR_CPU_VAR_REV() macro to describe the full hardware revision of a CPU, including the variant and (minor) revision.

Signed-off-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
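A sketch of the macro, assuming the variant field sits at MIDR[23:20] and the revision at MIDR[3:0]:

```c
#define MIDR_VARIANT_SHIFT	20

/* Full hardware revision: variant in MIDR[23:20], revision in MIDR[3:0]. */
#define MIDR_CPU_VAR_REV(var, rev) \
	(((var) << MIDR_VARIANT_SHIFT) | (rev))
```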
-
- 12 Jan 2017, 5 commits
-
-
Committed by Laura Abbott

x86 has an option, CONFIG_DEBUG_VIRTUAL, to do additional checks on virt_to_phys calls. The goal is to immediately catch users who are calling virt_to_phys on non-linear addresses. This includes callers using virt_to_phys on image addresses instead of __pa_symbol. As features such as CONFIG_VMAP_STACK get enabled for arm64, this becomes increasingly important. Add checks to catch bad virt_to_phys usage.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Laura Abbott

__pa_symbol is technically the macro that should be used for kernel symbols. Switch to it as a prerequisite for DEBUG_VIRTUAL, which will do bounds checking.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
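Usage, illustratively: image symbols go through __pa_symbol(), while only linear-map addresses should see __pa()/virt_to_phys():

```c
/* Correct: _end is a kernel image symbol, not a linear-map address. */
phys_addr_t phys_end = __pa_symbol(_end);

/* Suspect: would trip the DEBUG_VIRTUAL bounds checks on arm64. */
phys_addr_t phys_bad = __pa(_end);
```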
-
Committed by Laura Abbott

virt_to_pfn lacks a cast at the top level. Don't rely on __virt_to_phys for the conversion; explicitly cast to unsigned long.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
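The fixed definition is along these lines:

```c
/* Cast explicitly so any pointer type is accepted at the top level. */
#define virt_to_pfn(x) __phys_to_pfn(__virt_to_phys((unsigned long)(x)))
```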
-
Committed by Laura Abbott

Several x_to_y conversion macros exist outside the bounds of an __ASSEMBLY__ guard. Move them in preparation for supporting CONFIG_DEBUG_VIRTUAL.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Suzuki K Poulose

This patch adds the hook for emulating the MRS instruction to export the 'user visible' value of supported system registers. We emulate only the following ID space for system registers:

    Op0=3, Op1=0, CRn=0, CRm=[0, 4-7]

The rest will fall back to SIGILL. This capability is also advertised via a new HWCAP_CPUID.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
[will: add missing static keyword to enable_mrs_emulation]
Signed-off-by: Will Deacon <will.deacon@arm.com>
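A sketch of the ID-space test, using the sys_reg_Op0()-style helpers defined by the sys_reg helper patch later in this list (exact helper names may differ):

```c
static bool is_emulated(u32 id)
{
	return sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 &&
	       sys_reg_CRn(id) == 0 &&
	       (sys_reg_CRm(id) == 0 ||
		(sys_reg_CRm(id) >= 4 && sys_reg_CRm(id) <= 7));
}
```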
-
- 11 Jan 2017, 4 commits
-
-
Committed by Suzuki K Poulose

Track the user-visible fields of a CPU feature register. This will be used for exposing the values to userspace. All the user-visible fields of a feature register will be passed on as they are, while the others will be filled with their respective safe values.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Suzuki K Poulose

Add a helper to extract a register field from a given instruction.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
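A64 register fields are 5 bits wide, so the helper is essentially this (a sketch; the name is illustrative):

```c
/* Extract a 5-bit register field, e.g. Rt at bits [4:0]. */
static inline u32 insn_extract_reg(u32 insn, int shift)
{
	return (insn >> shift) & 0x1f;
}
```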
-
Committed by Suzuki K Poulose

Define helper macros to extract Op0, Op1, CRn, CRm and Op2 for a given sys_reg id. While at it, remove the explicit masking that was only used for Op0.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
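A sketch of the helpers; the shift/mask values shown assume the MRS/MSR system-register instruction encoding and may not match the header exactly:

```c
#define Op0_shift	19
#define Op0_mask	0x3
#define Op1_shift	16
#define Op1_mask	0x7
#define CRn_shift	12
#define CRn_mask	0xf
#define CRm_shift	8
#define CRm_mask	0xf
#define Op2_shift	5
#define Op2_mask	0x7

#define sys_reg_Op0(id)	(((id) >> Op0_shift) & Op0_mask)
#define sys_reg_Op1(id)	(((id) >> Op1_shift) & Op1_mask)
#define sys_reg_CRn(id)	(((id) >> CRn_shift) & CRn_mask)
#define sys_reg_CRm(id)	(((id) >> CRm_shift) & CRm_mask)
#define sys_reg_Op2(id)	(((id) >> Op2_shift) & Op2_mask)
```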
-
Committed by Suzuki K Poulose

Document the rules for choosing the safe value for the different types of features.

Cc: Dave Martin <dave.martin@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 10 Jan 2017, 2 commits
-
-
Committed by Will Deacon

The statistical profiling extension (SPE) is an optional feature of ARMv8.1 and is unlikely to be supported by all of the CPUs in a heterogeneous system. This patch updates the cpufeature checks so that such systems are not tainted as unsupported.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Suzuki Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by James Morse

Since its introduction, the UAO enable call has been broken, and is useless. Commit 2a6dcb2b ("arm64: cpufeature: Schedule enable() calls instead of calling them via IPI") fixed the framework so that these calls are scheduled and can modify PSTATE; now the call is simply useless. Remove it.

UAO is enabled by the code patching that causes get_user() and friends to use the 'ldtr' family of instructions. This relies on the PSTATE.UAO bit being set to match addr_limit, which we do in uao_thread_switch(), called via __switch_to(). All that is needed to enable UAO is to patch the code and call schedule(). __apply_alternatives_multi_stop() calls stop_machine() when it modifies the kernel text to enable the alternatives (including the UAO code in uao_thread_switch()). Once stop_machine() has finished, __switch_to() is called to reschedule the original task; this causes PSTATE.UAO to be set appropriately. An explicit enable() call is not needed.

Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
-
- 05 Jan 2017, 1 commit
-
-
Committed by Mark Rutland

Commit c02433dd ("arm64: split thread_info from task stack") inverted the relationship between get_current() and current_thread_info(), with sp_el0 now holding the current task_struct rather than the current thread_info. The new implementation of get_current() prevents the compiler from being able to optimize repeated calls to either, resulting in a noticeable penalty in some microbenchmarks.

This patch restores the previous optimisation by implementing get_current() in the same way as our old current_thread_info(), using a non-volatile asm statement.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Davidlohr Bueso <dbueso@suse.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
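The restored implementation reads sp_el0 through a non-volatile asm statement, so the compiler may merge repeated calls (a sketch matching the commit's description):

```c
static __always_inline struct task_struct *get_current(void)
{
	unsigned long sp_el0;

	/* Deliberately not volatile: repeated reads can be CSE'd. */
	asm ("mrs %0, sp_el0" : "=r" (sp_el0));

	return (struct task_struct *)sp_el0;
}
```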
-
- 27 Dec 2016, 1 commit
-
-
Committed by Al Viro

Split the asm-only parts of arm64's uaccess.h into a new header and use that from *.S files.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 25 Dec 2016, 1 commit
-
-
Committed by Linus Torvalds

This was entirely automated, using the script by Al:

    PATT='^[[:blank:]]*#[[:blank:]]*include[[:blank:]]*<asm/uaccess.h>'
    sed -i -e "s!$PATT!#include <linux/uaccess.h>!" \
        $(git grep -l "$PATT" | grep -v ^include/linux/uaccess.h)

to do the replacement at the end of the merge window.

Requested-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 21 Dec 2016, 2 commits
-
-
Committed by Lv Zheng

Now that all users have been cleaned up, remove the two deprecated APIs. As a Linux variable rather than an ACPICA variable, acpi_gbl_permanent_mmap is renamed to acpi_permanent_mmap to keep a consistent coding style across the entire Linux ACPI subsystem.

Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Alexander Popov

Introduce kaslr_offset(), similar to x86_64, to fix kcov.

[ Updated by Will Deacon ]

Link: http://lkml.kernel.org/r/1481417456-28826-2-git-send-email-alex.popov@linux.com
Signed-off-by: Alexander Popov <alex.popov@linux.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Jon Masters <jcm@redhat.com>
Cc: David Daney <david.daney@cavium.com>
Cc: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Nicolai Stange <nicstange@gmail.com>
Cc: James Morse <james.morse@arm.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Andrey Konovalov <andreyknvl@google.com>
Cc: Alexander Popov <alex.popov@linux.com>
Cc: syzkaller <syzkaller@googlegroups.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
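On arm64 the helper boils down to the difference between the runtime and link-time image bases (a sketch; x86_64 computes its offset analogously):

```c
static inline unsigned long kaslr_offset(void)
{
	/* kimage_vaddr: randomized runtime base; KIMAGE_VADDR: link-time base. */
	return kimage_vaddr - KIMAGE_VADDR;
}
```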
-
- 15 Dec 2016, 1 commit
-
-
Committed by Boris Ostrovsky

acpi_map_pxm_to_node() unconditionally maps nodes even when NUMA is turned off. So acpi_get_node() might return a node > 0, which is fatal when NUMA is disabled, as the rest of the kernel assumes that only node 0 exists. Expose numa_off to the ACPI code and return NUMA_NO_NODE when it's set.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: fenghua.yu@intel.com
Cc: tony.luck@intel.com
Cc: linux-ia64@vger.kernel.org
Cc: catalin.marinas@arm.com
Cc: rjw@rjwysocki.net
Cc: will.deacon@arm.com
Cc: linux-acpi@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: lenb@kernel.org
Link: http://lkml.kernel.org/r/1481602709-18260-1-git-send-email-boris.ostrovsky@oracle.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
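The guard amounts to something like the following (a hypothetical wrapper for illustration; the actual patch adds the check inside the mapping path itself):

```c
/* Never hand out a node other than NUMA_NO_NODE when NUMA is off. */
static int acpi_pxm_to_node_guarded(int pxm)
{
	if (numa_off)
		return NUMA_NO_NODE;

	return acpi_map_pxm_to_node(pxm);
}
```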
-
- 13 Dec 2016, 1 commit
-
-
Committed by Marc Zyngier

Commit 4b65a5db ("arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1") added conditional user access enable/disable. Unfortunately, a typo prevents the PAN bit from being cleared for the user access functions. Restore the PAN functionality by adding the missing '!'.

Fixes: 4b65a5db3627 ("arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1")
Reported-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 06 Dec 2016, 1 commit
-
-
Committed by Marc Zyngier

With .inst being largely broken on older binutils, it'd be better not to emit it at all when such a configuration is detected (as it leads to all kinds of horrors when using alternatives). Generalize the __emit_inst macro, use it extensively in asm/sysreg.h, and make it generate a .long when a broken gas is detected. The disassembly will be crap, but at least we can write semi-sane code.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
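A simplified sketch of the idea; the real header also stringifies the macro for use from C and has to cope with endianness, and the config symbol name is an assumption:

```c
#ifdef CONFIG_BROKEN_GAS_INST
/* Old gas: emit raw data; disassembly suffers but the bytes are right. */
#define __emit_inst(x)	.long (x)
#else
#define __emit_inst(x)	.inst (x)
#endif
```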
-
- 05 Dec 2016, 1 commit
-
-
Committed by Marc Zyngier

The asm/opcodes.h file is now gone, but probes.h still references it for no obvious reason. Removing the #include directive fixes the compilation.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 03 Dec 2016, 1 commit
-
-
Committed by Marc Zyngier

The ARM and arm64 Xen ports share a number of headers, leading to packaging issues when these headers need to be exported, as this breaks the reasonable requirement that an architecture port has self-contained headers. Fix the issue by moving the five header files to include/xen/arm, keeping local placeholders that include the relevant files.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
-
- 02 Dec 2016, 3 commits
-
-
Committed by Marc Zyngier

opcodes.h drags in a lot of definitions from the 32-bit port, most of which are not required at all. Clean things up a bit by moving the bare minimum of what is required next to its actual users, and drop the include file.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Robin Murphy

Under CONFIG_DEBUG_PREEMPT=y, this_cpu_ptr() ends up calling back into raw_smp_processor_id(), resulting in some hilariously catastrophic infinite recursion. In the normal case, we have:

    #define this_cpu_ptr(ptr) raw_cpu_ptr(ptr)

and everything is dandy. However, for CONFIG_DEBUG_PREEMPT, this_cpu_ptr() is defined in terms of my_cpu_offset, wherein the fun begins:

    #define my_cpu_offset per_cpu_offset(smp_processor_id())
    ...
    #define smp_processor_id() debug_smp_processor_id()
    ...
    notrace unsigned int debug_smp_processor_id(void)
    {
            return check_preemption_disabled("smp_processor_id", "");
    ...
    notrace static unsigned int check_preemption_disabled(const char *what1,
                                                          const char *what2)
    {
            int this_cpu = raw_smp_processor_id();

and bang. Use raw_cpu_ptr() directly to avoid that.

Fixes: 57c82954 ("arm64: make cpu number a percpu variable")
Reported-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
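The fix, in essence (a sketch of arm64's raw_smp_processor_id() built on the percpu cpu_number variable):

```c
DECLARE_PER_CPU_READ_MOSTLY(int, cpu_number);

/* raw_cpu_ptr() does no preemption checking, breaking the recursion. */
#define raw_smp_processor_id() (*raw_cpu_ptr(&cpu_number))
```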
-
Committed by Tomasz Nowicki

This patch provides the APEI arch-specific bits for ARM64. Meanwhile, it (1) moves the HEST type (ACPI_HEST_TYPE_IA32_CORRECTED_CHECK) checking to a generic place, and (2) selects HAVE_ACPI_APEI when EFI and ACPI are set on ARM64, because arch_apei_get_mem_attribute() uses efi_mem_attributes() on ARM64.

Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
Tested-by: Jonathan (Zhixiong) Zhang <zjzhang@codeaurora.org>
Signed-off-by: Fu Wei <fu.wei@linaro.org>
[ Fu Wei: improve && upstream ]
Acked-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Tyler Baicar <tbaicar@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 29 Nov 2016, 4 commits
-
-
Committed by Vladimir Murzin

readq- and writeq-style accessors are not supported in AArch32, so we need to specialise them and glue them later to a series of 32-bit accesses on the AArch32 side.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Vladimir Murzin

It'd be better to switch to CMA... but until that is done, redirect the flush_dcache operation so that a 32-bit implementation can be wired up later.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Will Deacon

The workaround for Cavium ThunderX erratum 23154 has a homebrew pipeflush built out of NOP sequences around the read of the IAR. This patch converts the code to use the new nops macro, which makes it a little easier to read.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Will Deacon

The GIC system registers are accessed using open-coded wrappers around the mrs_s/msr_s asm macros. This patch moves the code over to the {read,write}_sysreg_s accessors instead, reducing the amount of explicit asm blocks in the arch headers.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 24 Nov 2016, 1 commit
-
-
Committed by Catalin Marinas

The flush_cache_range() function (and similarly flush_cache_page()) is called when the kernel is changing an existing VA->PA mapping range to either a new PA or to different attributes. Since ARMv8 has PIPT-like D-caches, this function does not need to perform any D-cache maintenance. The I-cache maintenance is already handled via set_pte_at(), and flush_cache_range() cannot guarantee anyway that there are no cache lines left after invalidation, due to speculative loads.

This patch makes flush_cache_range() a no-op.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 22 Nov 2016, 1 commit
-
-
Committed by Catalin Marinas

When the TTBR0 PAN feature is enabled, the kernel entry points need to disable access to TTBR0_EL1. The PAN status of the interrupted context is stored as part of the saved pstate, reusing the PSR_PAN_BIT (22). Restoring access to TTBR0_EL1 is done on exception return if returning to user or returning to a context where PAN was disabled.

Context switching via switch_mm() must defer the update of TTBR0_EL1 until a return to user or an explicit uaccess_enable() call.

Special care needs to be taken for two cases where TTBR0_EL1 is set outside the normal kernel context switch operation: EFI run-time services (via efi_set_pgd) and CPU suspend (via cpu_(un)install_idmap). Code has been added to avoid deferred TTBR0_EL1 switching, as in switch_mm(), and to restore the reserved TTBR0_EL1 when uninstalling the special TTBR0_EL1.

User cache maintenance (user_cache_maint_handler and __flush_cache_user_range) needs TTBR0_EL1 re-instated, since the operations are performed by user virtual address.

This patch also removes a stale comment on the switch_mm() function.

Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
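Disabling user access then amounts to installing the reserved TTBR0 (a sketch; the kernel's actual uaccess_ttbr0_{enable,disable} helpers also handle the ASID and are patched in via alternatives):

```c
static inline void uaccess_ttbr0_disable_sketch(void)
{
	unsigned long ttbr = virt_to_phys(empty_zero_page);

	/* Point TTBR0_EL1 at the reserved (zero) page: no user mappings. */
	write_sysreg(ttbr, ttbr0_el1);
	isb();
}
```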
-