- 25 March 2015, 6 commits
-
-
Committed by Hanjun Guo
The MADT contains the MPIDR information that is essential for SMP initialization. Parse the GIC CPU interface structures to get the MPIDR values, map them via cpu_logical_map(), and add each enabled CPU with a valid MPIDR to cpu_possible_map. ACPI 5.1 has only two explicit methods to boot SMP, PSCI and the Parking protocol, but the Parking protocol is currently only specified for ARMv7, so make PSCI the only SMP boot protocol until the ACPI spec or the Parking protocol spec is updated. Parking protocol patches for SMP boot will be sent upstream when the new version of the Parking protocol is ready. CC: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> CC: Catalin Marinas <catalin.marinas@arm.com> CC: Will Deacon <will.deacon@arm.com> CC: Mark Rutland <mark.rutland@arm.com> Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com> Tested-by: Yijing Wang <wangyijing@huawei.com> Tested-by: Mark Langsdorf <mlangsdo@redhat.com> Tested-by: Jon Masters <jcm@redhat.com> Tested-by: Timur Tabi <timur@codeaurora.org> Tested-by: Robert Richter <rrichter@cavium.com> Acked-by: Robert Richter <rrichter@cavium.com> Acked-by: Olof Johansson <olof@lixom.net> Reviewed-by: Grant Likely <grant.likely@linaro.org> Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
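To make the flow concrete, here is a minimal, self-contained C sketch of walking GIC CPU interface (GICC) entries and assigning logical CPU numbers. The struct layout, flag value, and helper names are illustrative stand-ins, not the ACPICA or kernel definitions.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative GICC-entry fields only; the real ACPI MADT structure has more. */
struct gicc_entry {
    uint64_t mpidr;
    uint32_t flags;
};
#define GICC_ENABLED 0x1
#define NR_CPUS 8

static uint64_t cpu_logical_map[NR_CPUS];
static int nr_possible;

/* Map each enabled GICC entry with a valid MPIDR to the next logical CPU id. */
static void map_gicc(const struct gicc_entry *e)
{
    if (!(e->flags & GICC_ENABLED) || nr_possible >= NR_CPUS)
        return;                         /* skip disabled entries, honour NR_CPUS */
    cpu_logical_map[nr_possible++] = e->mpidr;
}

int main(void)
{
    const struct gicc_entry madt_gicc[] = {
        { 0x000, GICC_ENABLED }, { 0x001, GICC_ENABLED }, { 0x100, 0 },
    };
    for (unsigned i = 0; i < sizeof(madt_gicc) / sizeof(madt_gicc[0]); i++)
        map_gicc(&madt_gicc[i]);
    for (int cpu = 0; cpu < nr_possible; cpu++)
        printf("cpu%d -> MPIDR 0x%llx\n", cpu,
               (unsigned long long)cpu_logical_map[cpu]);
    return 0;
}
```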
-
Committed by Graeme Gregory
There are two flags: PSCI_COMPLIANT and PSCI_USE_HVC. When set, the former signals to the OS that the firmware is PSCI compliant. The latter selects the appropriate conduit for PSCI calls by toggling between Hypervisor Calls (HVC) and Secure Monitor Calls (SMC). The FADT carries this information in ACPI 5.1; the FADT is parsed during ACPI table init and copied into struct acpi_gbl_FADT, so use the flags in struct acpi_gbl_FADT for PSCI init. ACPI 5.1 does not support self-defined PSCI function IDs, which means that only PSCI 0.2+ is supported in ACPI. CC: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> CC: Catalin Marinas <catalin.marinas@arm.com> CC: Will Deacon <will.deacon@arm.com> Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com> Tested-by: Yijing Wang <wangyijing@huawei.com> Tested-by: Mark Langsdorf <mlangsdo@redhat.com> Tested-by: Jon Masters <jcm@redhat.com> Tested-by: Timur Tabi <timur@codeaurora.org> Tested-by: Robert Richter <rrichter@cavium.com> Acked-by: Robert Richter <rrichter@cavium.com> Acked-by: Olof Johansson <olof@lixom.net> Acked-by: Grant Likely <grant.likely@linaro.org> Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org> Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org> Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
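A minimal sketch of the conduit decision described above; the flag macros and the enum are stand-ins for the ACPI 5.1 FADT ARM boot flags, not the ACPICA names.

```c
#include <stdint.h>
#include <stdio.h>

/* Stand-ins for the ACPI 5.1 FADT ARM boot flags; bit values are illustrative. */
#define FADT_PSCI_COMPLIANT (1u << 0)
#define FADT_PSCI_USE_HVC   (1u << 1)

enum psci_conduit { PSCI_CONDUIT_NONE, PSCI_CONDUIT_SMC, PSCI_CONDUIT_HVC };

static enum psci_conduit psci_conduit_from_fadt(uint16_t arm_boot_flags)
{
    if (!(arm_boot_flags & FADT_PSCI_COMPLIANT))
        return PSCI_CONDUIT_NONE;             /* firmware is not PSCI compliant */
    return (arm_boot_flags & FADT_PSCI_USE_HVC) ? PSCI_CONDUIT_HVC
                                                : PSCI_CONDUIT_SMC;
}

int main(void)
{
    printf("conduit = %d\n",
           psci_conduit_from_fadt(FADT_PSCI_COMPLIANT | FADT_PSCI_USE_HVC));
    return 0;
}
```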
-
Committed by Graeme Gregory
If the early ACPI boot code is satisfied that we have valid ACPI tables and acpi=force has been passed, then do not unflatten the devicetree, effectively disabling further hardware probing from DT. CC: Catalin Marinas <catalin.marinas@arm.com> CC: Will Deacon <will.deacon@arm.com> Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com> Tested-by: Yijing Wang <wangyijing@huawei.com> Tested-by: Mark Langsdorf <mlangsdo@redhat.com> Tested-by: Jon Masters <jcm@redhat.com> Tested-by: Timur Tabi <timur@codeaurora.org> Tested-by: Robert Richter <rrichter@cavium.com> Acked-by: Robert Richter <rrichter@cavium.com> Acked-by: Olof Johansson <olof@lixom.net> Acked-by: Grant Likely <grant.likely@linaro.org> Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org> Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Al Stone
This implements the following policy to decide whether ACPI should be used to boot the system: - acpi=off: ACPI will not be used to boot the system, even if there is no alternative available (e.g., the device tree is empty) - acpi=force: only ACPI will be used to boot the system; if that fails, there will be no fallback to alternative methods (such as device tree) - otherwise, ACPI will be used as a fallback if the device tree turns out to lack a platform description; the heuristic to decide this is whether /chosen is the only node present at depth 1 CC: Catalin Marinas <catalin.marinas@arm.com> CC: Will Deacon <will.deacon@arm.com> CC: Rafael J. Wysocki <rjw@rjwysocki.net> Acked-by: Olof Johansson <olof@lixom.net> Acked-by: Grant Likely <grant.likely@linaro.org> Tested-by: Timur Tabi <timur@codeaurora.org> Signed-off-by: Al Stone <al.stone@linaro.org> Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org> Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
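The policy reduces to a small decision function. The following is a hedged sketch of the rules above, not the kernel code; the names are invented for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

enum acpi_param { ACPI_DEFAULT, ACPI_OFF, ACPI_FORCE };

/*
 * Decide whether to boot via ACPI, given the command-line setting and
 * whether the device tree carries a real platform description
 * (i.e. more than just /chosen at depth 1).
 */
static bool use_acpi(enum acpi_param param, bool dt_has_platform_desc)
{
    if (param == ACPI_OFF)
        return false;                 /* never use ACPI, even with an empty DT */
    if (param == ACPI_FORCE)
        return true;                  /* only ACPI; no fallback to DT */
    return !dt_has_platform_desc;     /* default: ACPI only as a fallback */
}

int main(void)
{
    printf("%d %d %d\n",
           use_acpi(ACPI_DEFAULT, true),   /* DT describes the platform: 0 */
           use_acpi(ACPI_DEFAULT, false),  /* DT is effectively empty:    1 */
           use_acpi(ACPI_OFF, false));     /* acpi=off always wins:       0 */
    return 0;
}
```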
-
Committed by Hanjun Guo
CONFIG_ACPI depends on CONFIG_PCI on x86 and ia64. In the ARM64 server world most systems will have PCIe, but some may not, so making CONFIG_ACPI depend on CONFIG_PCI on ARM64 will satisfy both cases. For that, we need some arch-dependent PCI functions to access the config space before the PCI root bridge is created, and pci_acpi_scan_root() to create the PCI root bus. So introduce some stub functions here to make the ACPI core compile, and revisit them later when they are implemented on ARM64. CC: Liviu Dudau <Liviu.Dudau@arm.com> CC: Catalin Marinas <catalin.marinas@arm.com> CC: Will Deacon <will.deacon@arm.com> Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com> Tested-by: Yijing Wang <wangyijing@huawei.com> Tested-by: Mark Langsdorf <mlangsdo@redhat.com> Tested-by: Jon Masters <jcm@redhat.com> Tested-by: Timur Tabi <timur@codeaurora.org> Tested-by: Robert Richter <rrichter@cavium.com> Acked-by: Robert Richter <rrichter@cavium.com> Reviewed-by: Grant Likely <grant.likely@linaro.org> Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Al Stone
As we want to get ACPI tables to parse and then use the information for system initialization, we should get the RSDP (Root System Description Pointer) first; it then locates the Extended System Description Table (XSDT), which contains the 64-bit physical addresses that point to the other boot-time tables. Introduce acpi.c and its related header files in this patch to provide the fundamental extern variables and functions needed by the ACPI core, and then get the boot-time tables as needed. - asm/acenv.h for arch-specific ACPICA environments and implementation; it is needed unconditionally by the ACPI core; - asm/acpi.h for arch-specific variables and functions needed by the ACPI driver core; - acpi.c for the ARM64-related ACPI implementation for the ACPI driver core; acpi_boot_table_init() is introduced to get the RSDP and the boot-time tables. It will be called in setup_arch() before paging_init(), so we should use the early_memremap() mechanism here to get the RSDP and all the table pointers. The FADT Major.Minor version was introduced in ACPI 5.1 and is the same as the ACPI version. In ACPI 5.1 some major gaps are fixed for ARM, such as updates in the MADT table for GIC and SMP init; without those updates we cannot get the MPIDR for SMP init or the GICv2/3-related init information, so we can't boot arm64 ACPI properly with table versions predating 5.1. If firmware provides ACPI tables with an ACPI version less than 5.1, the OS has no way to retrieve the configuration data necessary to init the SMP boot protocol and the GIC properly, so disable ACPI if we get an FADT table with a version less than 5.1 when acpi_boot_table_init() is called. CC: Catalin Marinas <catalin.marinas@arm.com> CC: Will Deacon <will.deacon@arm.com> CC: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com> Tested-by: Yijing Wang <wangyijing@huawei.com> Tested-by: Mark Langsdorf <mlangsdo@redhat.com> Tested-by: Jon Masters <jcm@redhat.com> Tested-by: Timur Tabi <timur@codeaurora.org> Tested-by: Robert Richter <rrichter@cavium.com> Acked-by: Robert Richter <rrichter@cavium.com> Acked-by: Olof Johansson <olof@lixom.net> Acked-by: Grant Likely <grant.likely@linaro.org> Signed-off-by: Al Stone <al.stone@linaro.org> Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org> Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org> Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
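A tiny sketch of the version gate described in the last paragraph; the function name is illustrative, but the rule (reject FADT revisions older than 5.1) follows the text above.

```c
#include <stdbool.h>
#include <stdio.h>

/*
 * ACPI tables older than 5.1 lack the MADT/GIC updates needed for arm64
 * SMP and GIC init, so reject them.  Parameter names are illustrative.
 */
static bool fadt_version_supported(unsigned major, unsigned minor)
{
    return major > 5 || (major == 5 && minor >= 1);
}

int main(void)
{
    printf("5.0 -> %d, 5.1 -> %d, 6.0 -> %d\n",
           fadt_version_supported(5, 0),
           fadt_version_supported(5, 1),
           fadt_version_supported(6, 0));
    return 0;
}
```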
-
- 14 March 2015, 2 commits
-
-
Committed by Ard Biesheuvel
Another one for the big head.S spring cleaning: the label should come after the .align, or it may point to the padding. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel
If UEFI Runtime Services are available, they are preferred over direct PSCI calls or other methods to reset the system. For the reset case, we need to hook into machine_restart(), as the arm_pm_restart function pointer may be overwritten by modules. Tested-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Matt Fleming <matt.fleming@intel.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 28 February 2015, 1 commit
-
-
Committed by Catalin Marinas
The native (64-bit) sigval_t union contains sival_int (32-bit) and sival_ptr (64-bit). When a compat application invokes a syscall that takes a sigval_t value (as part of a larger structure, e.g. compat_sys_mq_notify, compat_sys_timer_create), the compat_sigval_t union is converted to the native sigval_t with sival_int overlapping either the least or the most significant half of sival_ptr, depending on endianness. When the corresponding signal is delivered to a compat application, on big endian the current (compat_uptr_t)sival_ptr cast always returns 0, since sival_int corresponds to the top part of sival_ptr. This patch fixes copy_siginfo_to_user32() so that sival_int is copied to the compat_siginfo_t structure. Cc: <stable@vger.kernel.org> Reported-by: Bamvor Jian Zhang <bamvor.zhangjian@huawei.com> Tested-by: Bamvor Jian Zhang <bamvor.zhangjian@huawei.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
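The union overlap is easy to see in a self-contained C sketch. The two unions below are simplified stand-ins for sigval_t and compat_sigval_t, and the fix amounts to copying sival_int rather than truncating sival_ptr.

```c
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the native and compat sigval unions. */
union native_sigval {          /* 64-bit kernel view */
    int32_t  sival_int;
    uint64_t sival_ptr;        /* pointer modelled as a 64-bit integer */
};

union compat_sigval {          /* 32-bit userspace view */
    int32_t  sival_int;
    uint32_t sival_ptr;
};

int main(void)
{
    union native_sigval native = { .sival_int = 0x1234 };
    union compat_sigval to_user;

    /* Truncating sival_ptr loses sival_int on big endian; copy the int field. */
    to_user.sival_int = native.sival_int;
    printf("delivered sival_int = 0x%x\n", (unsigned)to_user.sival_int);
    return 0;
}
```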
-
- 27 February 2015, 3 commits
-
-
Committed by Marc Zyngier
Patch 2f896d58 ("arm64: use fixmap for text patching") changed the way we patch the kernel text, using a fixmap when the kernel or modules are flagged as read only. Unfortunately, a flaw in the logic makes it fall over when patching modules without CONFIG_DEBUG_SET_MODULE_RONX enabled: [...] [ 32.032636] Call trace: [ 32.032716] [<fffffe00003da0dc>] __copy_to_user+0x2c/0x60 [ 32.032837] [<fffffe0000099f08>] __aarch64_insn_write+0x94/0xf8 [ 32.033027] [<fffffe000009a0a0>] aarch64_insn_patch_text_nosync+0x18/0x58 [ 32.033200] [<fffffe000009c3ec>] ftrace_modify_code+0x58/0x84 [ 32.033363] [<fffffe000009c4e4>] ftrace_make_nop+0x3c/0x58 [ 32.033532] [<fffffe0000164420>] ftrace_process_locs+0x3d0/0x5c8 [ 32.033709] [<fffffe00001661cc>] ftrace_module_init+0x28/0x34 [ 32.033882] [<fffffe0000135148>] load_module+0xbb8/0xfc4 [ 32.034044] [<fffffe0000135714>] SyS_finit_module+0x94/0xc4 [...] This is triggered by the use of virt_to_page() on a module address, which ends up pointing to Nowhereland if you're lucky, or corrupting your precious data if not. This patch fixes the logic by mimicking what is done on arm: - If we're patching a module and CONFIG_DEBUG_SET_MODULE_RONX is set, use vmalloc_to_page(). - If we're patching the kernel and CONFIG_DEBUG_RODATA is set, use virt_to_page(). - Otherwise, use the provided address, as we can write to it directly. Tested on 4.0-rc1 as a KVM guest. Reported-by: Richard W.M. Jones <rjones@redhat.com> Reviewed-by: Kees Cook <keescook@chromium.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Laura Abbott <lauraa@codeaurora.org> Tested-by: Richard W.M. Jones <rjones@redhat.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
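The decision logic above can be sketched as follows; the helpers are hypothetical placeholders for the vmalloc_to_page()/virt_to_page() lookups, and this is a simplified illustration rather than the actual kernel function.

```c
#include <stdbool.h>
#include <stdio.h>

enum patch_target { TARGET_MODULE, TARGET_KERNEL };

/* Hypothetical stand-ins for vmalloc_to_page()/virt_to_page() lookups. */
static void *page_of_vmalloc_addr(void *addr) { return addr; }
static void *page_of_linear_addr(void *addr)  { return addr; }

/*
 * Pick how to reach the text being patched (a sketch of the policy in the
 * commit message, not the kernel code):
 *  - module text with DEBUG_SET_MODULE_RONX: look up the page via the
 *    vmalloc-style helper, since modules live in vmalloc space
 *  - kernel text with DEBUG_RODATA: look up the page via the linear-map helper
 *  - otherwise the mapping is writable: patch the address directly
 */
static void *patch_map(void *addr, enum patch_target t,
                       bool module_ronx, bool rodata)
{
    if (t == TARGET_MODULE && module_ronx)
        return page_of_vmalloc_addr(addr);
    if (t == TARGET_KERNEL && rodata)
        return page_of_linear_addr(addr);
    return addr;
}

int main(void)
{
    int word;
    printf("%p\n", patch_map(&word, TARGET_KERNEL, false, true));
    return 0;
}
```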
-
Committed by Will Deacon
An arm64 allmodconfig fails to build with GCC 5 due to __asmeq assertions in the PSCI firmware calling code firing, because mcount preambles break our assumptions about register allocation of function arguments: /tmp/ccDqJsJ6.s: Assembler messages: /tmp/ccDqJsJ6.s:60: Error: .err encountered /tmp/ccDqJsJ6.s:61: Error: .err encountered /tmp/ccDqJsJ6.s:62: Error: .err encountered /tmp/ccDqJsJ6.s:99: Error: .err encountered /tmp/ccDqJsJ6.s:100: Error: .err encountered /tmp/ccDqJsJ6.s:101: Error: .err encountered This patch fixes the issue by moving the PSCI calls out-of-line into their own assembly files, which are safe from the compiler's meddling fingers. Reported-by: Andy Whitcroft <apw@canonical.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Nathan Lynch
The vdso implementation of clock_getres currently returns 0 (success) whenever a null timespec is provided by the caller, regardless of the clock id supplied. This behavior is incorrect. It should fall back to the syscall when an unrecognized clock id is passed, even when the timespec argument is null. This ensures that clock_getres always returns an error for invalid clock ids. Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
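A sketch of the corrected behaviour: an unrecognised clock id must produce an error (or a fallback to the real syscall) even when the result pointer is NULL. Everything here, including the clock-id check, is illustrative rather than the vdso code.

```c
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct timespec_like { long tv_sec, tv_nsec; };

static bool clock_id_handled_here(int clk) { return clk == 0 || clk == 1; }

/*
 * An unknown clock id must fail even when the caller passes a NULL res
 * pointer; previously the null check short-circuited the id check.
 */
static int fast_clock_getres(int clk, struct timespec_like *res)
{
    if (!clock_id_handled_here(clk))
        return -EINVAL;               /* or fall back to the real syscall */
    if (res) {
        res->tv_sec = 0;
        res->tv_nsec = 1;             /* illustrative 1 ns resolution */
    }
    return 0;
}

int main(void)
{
    printf("bad id, NULL res -> %d\n", fast_clock_getres(42, NULL));
    return 0;
}
```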
-
- 23 February 2015, 1 commit
-
-
Committed by Pratyush Anand
ftrace_enable_ftrace_graph_caller and ftrace_disable_ftrace_graph_caller should replace the B (branch) instruction, not the BL (branch-with-link) instruction. Commit 9f1ae759 ("arm64: Correct ftrace calls to aarch64_insn_gen_branch_imm()") had a typo and used AARCH64_INSN_BRANCH_LINK instead of AARCH64_INSN_BRANCH_NOLINK. Either instruction will work, as the link register is saved/restored across the branch, but this better matches the intention of the code. Signed-off-by: Pratyush Anand <panand@redhat.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 14 February 2015, 1 commit
-
-
Committed by Andrey Ryabinin
For instrumenting global variables, KASan will shadow the memory backing memory for modules. So on module loading we will need to allocate memory for the shadow and map it at the address in the shadow region that corresponds to the address allocated in module_alloc(). __vmalloc_node_range() could be used for this purpose, except that it puts a guard hole after the allocated area. A guard hole in shadow memory would be a problem, because at some future point we might need to have shadow memory at the address occupied by the guard hole; we could then fail to allocate shadow for module_alloc(). Now that we have the VM_NO_GUARD flag for disabling the guard page, we need to pass it into __vmalloc_node_range(). Add a new parameter 'vm_flags' to the __vmalloc_node_range() function. Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Konstantin Serebryany <kcc@google.com> Cc: Dmitry Chernenkov <dmitryc@google.com> Signed-off-by: Andrey Konovalov <adech.fo@gmail.com> Cc: Yuri Gribov <tetra2005@gmail.com> Cc: Konstantin Khlebnikov <koct9i@gmail.com> Cc: Sasha Levin <sasha.levin@oracle.com> Cc: Christoph Lameter <cl@linux.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 13 February 2015, 1 commit
-
-
Committed by Andy Lutomirski
If an attacker can cause a controlled kernel stack overflow, overwriting the restart block is a very juicy exploit target. This is because the restart_block is held in the same memory allocation as the kernel stack. Moving the restart block to struct task_struct prevents this exploit by making the restart_block harder to locate. Note that there are other fields in thread_info that are also easy targets, at least on some architectures. It's also a decent simplification, since the restart code is more or less identical on all architectures. [james.hogan@imgtec.com: metag: align thread_info::supervisor_stack] Signed-off-by: Andy Lutomirski <luto@amacapital.net> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Kees Cook <keescook@chromium.org> Cc: David Miller <davem@davemloft.net> Acked-by: Richard Weinberger <richard@nod.at> Cc: Richard Henderson <rth@twiddle.net> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Matt Turner <mattst88@gmail.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Haavard Skinnemoen <hskinnemoen@gmail.com> Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no> Cc: Steven Miao <realmz6@gmail.com> Cc: Mark Salter <msalter@redhat.com> Cc: Aurelien Jacquiot <a-jacquiot@ti.com> Cc: Mikael Starvik <starvik@axis.com> Cc: Jesper Nilsson <jesper.nilsson@axis.com> Cc: David Howells <dhowells@redhat.com> Cc: Richard Kuo <rkuo@codeaurora.org> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Michal Simek <monstr@monstr.eu> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Jonas Bonn <jonas@southpole.se> Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> Cc: Helge Deller <deller@gmx.de> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc) Tested-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc) Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Chen Liqin <liqin.linux@gmail.com> Cc: Lennox Wu <lennox.wu@gmail.com> Cc: Chris Metcalf <cmetcalf@ezchip.com> Cc: Guan Xuetao <gxt@mprc.pku.edu.cn> Cc: Chris Zankel <chris@zankel.net> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Guenter Roeck <linux@roeck-us.net> Signed-off-by: James Hogan <james.hogan@imgtec.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 27 January 2015, 4 commits
-
-
Committed by Lorenzo Pieralisi
The ARM64_CPU_SUSPEND config option was introduced to make the code providing context save/restore selectable only on platforms requiring power management capabilities. Currently ARM64_CPU_SUSPEND depends on the PM_SLEEP config option, which in turn is set by the SUSPEND config option. The introduction of CPU_IDLE for arm64 requires that code configured by ARM64_CPU_SUSPEND (context save/restore) be compiled in, so that the CPU idle driver can rely on CPU operations carrying out context save/restore. The ARM64_CPUIDLE config option (ARM64 generic idle driver) is therefore forced to select ARM64_CPU_SUSPEND, even if there may be failed dependencies (i.e. PM_SLEEP), which is not a clean way of handling the kernel configuration option. For these reasons, this patch removes the ARM64_CPU_SUSPEND config option and makes the context save/restore dependent on CPU_PM, which is selected whenever either SUSPEND or CPU_IDLE is configured, cleaning up dependencies in the process. This way, code previously configured through ARM64_CPU_SUSPEND is compiled in whenever a power management subsystem requires it to be present in the kernel (SUSPEND || CPU_IDLE), which is the behaviour expected on ARM64 kernels. The cpu_suspend and cpu_init_idle CPU operations are added only if CPU_IDLE is selected, since they are CPU_IDLE-specific methods and should be grouped and defined accordingly. PSCI CPU operations are updated to reflect the introduced changes. Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Will Deacon <will.deacon@arm.com> Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com> Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Mark Rutland
As with x86, mark the sys_call_table const so that it will be placed in the .rodata section. This will cause attempts to modify the table (accidental or deliberate) to fail when strict page permissions are in place. In the absence of strict page permissions, there should be no functional change. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Catalin Marinas
This patch moves the sys_rt_sigreturn_wrapper prototype to arch/arm64/kernel/sys.c and removes the asm/syscalls.h header. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Catalin Marinas
Unlike the sys_call_table[], the compat one was implemented in sys32.S, making it impossible to notice discrepancies between the number of compat syscalls and the __NR_compat_syscalls macro, the latter having to be defined in asm/unistd.h, as including asm/unistd32.h would cause conflicts on __NR_* definitions. With this patch, incorrect __NR_compat_syscalls values will result in a build-time error. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Suggested-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com>
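The build-time check works because a C array declared with an explicit size rejects designated initializers beyond that size. Here is a stand-alone sketch of the pattern; the names and sizes are invented, and the [a ... b] range initializer is a GCC/Clang extension the kernel relies on.

```c
#include <stdio.h>

#define NR_SYSCALLS 4
typedef long (*syscall_fn)(void);

static long sys_ni(void) { return -1; }
static long sys_a(void)  { return 0;  }

/*
 * With the explicit array size, a designated initializer whose index is
 * >= NR_SYSCALLS is a compile-time error, catching any mismatch between
 * the table and the syscall-count macro.
 */
static const syscall_fn sys_call_table[NR_SYSCALLS] = {
    [0 ... NR_SYSCALLS - 1] = sys_ni,   /* default every slot to "not implemented" */
    [1] = sys_a,                        /* then fill in real entries */
};

int main(void)
{
    printf("%ld %ld\n", sys_call_table[0](), sys_call_table[1]());
    return 0;
}
```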
-
- 24 January 2015, 5 commits
-
-
Committed by Jiang Liu
Commit 9a46ad6d "smp: make smp_call_function_many() use logic similar to smp_call_function_single()" unified the way single and multiple cross-CPU function calls are handled. Now only one interrupt is needed for architecture-specific code to support the generic SMP function call interfaces, so kill the redundant single-function-call interrupt. Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com> Acked-by: Will Deacon <will.deacon@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K. Poulose
Emulate the deprecated 'setend' instruction for AArch32 tasks. setend [le/be] - sets the endianness of EL0. On systems with CPUs which support mixed endian at EL0, hardware support for the instruction can be enabled by setting the SCTLR_EL1.SED bit. Like the other emulated instructions, it is controlled by an entry in /proc/sys/abi/. For more information see Documentation/arm64/legacy_instructions.txt. The instruction is emulated by setting/clearing the SPSR_EL1.E bit, which will be reflected in PSTATE.E in the AArch32 context. This patch also restores the native endianness for the execution of signal handlers, since the process could have changed the endianness. Note: All CPUs on the system must have mixed endian support at EL0. Once the handler is registered, hotplugging a CPU which doesn't support mixed endian could lead to unexpected results/behavior in applications. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Punit Agrawal <punit.agrawal@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
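At its core the emulation toggles the endianness bit in the saved program status value. A minimal sketch follows, assuming the AArch32 E bit sits at bit 9 of the SPSR/PSTATE value; it is an illustration, not the kernel handler.

```c
#include <stdint.h>
#include <stdio.h>

#define PSR_E_BIT (1u << 9)   /* AArch32 PSTATE/SPSR endianness (E) bit */

/* Sketch of the emulation: set or clear the E bit in the saved SPSR value. */
static uint32_t emulate_setend(uint32_t spsr, int big_endian)
{
    return big_endian ? (spsr | PSR_E_BIT) : (spsr & ~PSR_E_BIT);
}

int main(void)
{
    uint32_t spsr = 0;

    spsr = emulate_setend(spsr, 1);   /* setend be */
    printf("after 'setend be': E=%u\n", !!(spsr & PSR_E_BIT));
    spsr = emulate_setend(spsr, 0);   /* setend le */
    printf("after 'setend le': E=%u\n", !!(spsr & PSR_E_BIT));
    return 0;
}
```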
-
Committed by Suzuki K. Poulose
As of now, each insn_emulation has a CPU hotplug notifier that enables/disables the CPU feature bit for the functionality. This patch rearranges the code such that there is only one notifier, which runs through the list of registered emulation hooks and invokes their corresponding set_hw_mode. We do nothing when a CPU is dying, as we will set the appropriate bits when it comes back online, based on the state of the hooks. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Punit Agrawal <punit.agrawal@arm.com> [catalin.marinas@arm.com: fix pr_warn compilation error] [catalin.marinas@arm.com: remove unnecessary "insn" check] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K. Poulose
This patch keeps track of mixed endian EL0 support across the system and provides helper functions to export it. The status is a boolean indicating whether all the CPUs on the system support mixed endian at EL0. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Robin Murphy
Add the necessary call to of_iommu_init. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 22 January 2015, 3 commits
-
-
Committed by Ard Biesheuvel
Now that the create_mapping() code in mm/mmu.c is able to support setting up kernel page tables at initcall time, we can move the whole virtmap creation to arm64_enable_runtime_services() instead of having a distinct stage during early boot. This also allows us to drop the arm64-specific EFI_VIRTMAP flag. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Laura Abbott
Add page protections for arm64 similar to those in arm. This is for security reasons, to prevent certain classes of exploits. The current method: - Map all memory as either RWX or RW. We round to the nearest section to avoid creating page tables before everything is mapped. - Once everything is mapped, if either end of the RWX section should not be X, we split the PMD and remap as necessary. - When initmem is to be freed, we change the permissions back to RW (using stop_machine if necessary to flush the TLB). - If CONFIG_DEBUG_RODATA is set, the read-only sections are set read only. Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Kees Cook <keescook@chromium.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Laura Abbott
When kernel text is marked as read only, it cannot be modified directly. Use a fixmap to modify the text instead, in a similar manner to x86 and arm. Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Kees Cook <keescook@chromium.org> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 21 January 2015, 1 commit
-
-
Committed by Andre Przywara
ICC_SRE_EL1 is a system register allowing msr/mrs accesses to the GIC CPU interface for EL1 (guests). Currently we force it to 0, but for proper GICv3 support we have to allow guests to use it (depending on their selected virtual GIC model). So add ICC_SRE_EL1 to the list of saved/restored registers on a world switch, but actually disallow a guest to change it by only restoring a fixed, once-initialized value. This value depends on the GIC model userland has chosen for a guest. Signed-off-by: Andre Przywara <andre.przywara@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
- 17 January 2015, 2 commits
-
-
Committed by Mark Rutland
When booting with EFI, we acquire the EFI memory map after parsing the early params. This unfortunately renders the mem= option useless, as we call memblock_enforce_memory_limit (which uses memblock_remove_range behind the scenes) before we've added any memblocks. We end up removing nothing, then adding all of memory later when efi_init calls reserve_regions. Instead, we can log the limit and apply it later when we do the rest of the memblock work in memblock_init, which should work regardless of the presence of EFI. At the same time we may as well move the early parameter into arm64's mm/init.c, close to arm64_memblock_init. Any memory which must be mapped (e.g., for use by EFI runtime services) must be mapped explicitly rather than relying on the linear mapping, which may be truncated as a result of a mem= option passed on the kernel command line. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Leif Lindholm <leif.lindholm@linaro.org> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel
When remapping the UEFI memory map using ioremap_cache(), we have to deal with potential failure. Note that, even if the common case is for ioremap_cache() to return the existing linear mapping of the memory map, we cannot rely on that always being the case, e.g., in the presence of a mem= kernel parameter. At the same time, remove a stale comment and move the memmap code together. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Mark Salter <msalter@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 16 January 2015, 1 commit
-
-
Committed by Ard Biesheuvel
This ensures all stub components are freed when the kernel proper is done booting, by prefixing the names of all ELF sections that have the SHF_ALLOC attribute with ".init". This approach ensures that even implicitly emitted allocated data (like initializer values and string literals) is covered. At the same time, remove some __init annotations in the stub that have now become redundant, and add the __init annotation to handle_kernel_image, which would now trigger a section mismatch warning without it. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
-
- 15 January 2015, 3 commits
-
-
Committed by Mark Rutland
To aid the developer when something triggers an unexpected exception, decode the ESR_ELx.EC field when logging an ESR_ELx value. This doesn't tell the developer the specifics of the exception encoded in the remaining IL and ISS bits, but it can be helpful to distinguish between exception classes (e.g. an SError and a data abort) without having to manually decode the field, which can be tiresome. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Peter Maydell <peter.maydell@linaro.org> Cc: Will Deacon <will.deacon@arm.com>
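A sketch of what such decoding looks like: the shift and mask follow the ESR_ELx layout (EC in bits [31:26]), and only a handful of exception classes are listed for illustration; the kernel's table covers them all.

```c
#include <stdint.h>
#include <stdio.h>

#define ESR_ELx_EC_SHIFT 26
#define ESR_ELx_EC_MASK  0x3Fu

/* A few exception classes only; the architecture defines many more. */
static const char *esr_class_str(uint32_t esr)
{
    switch ((esr >> ESR_ELx_EC_SHIFT) & ESR_ELx_EC_MASK) {
    case 0x15: return "SVC (AArch64)";
    case 0x24: return "Data abort (lower EL)";
    case 0x25: return "Data abort (current EL)";
    case 0x2F: return "SError";
    default:   return "UNRECOGNIZED EC";
    }
}

int main(void)
{
    uint32_t esr = 0x92000045;   /* example: data abort taken from a lower EL */
    printf("ESR 0x%08x -> %s\n", esr, esr_class_str(esr));
    return 0;
}
```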
-
Committed by Mark Rutland
Now that we have common ESR_ELx_* macros, move the core arm64 code over to them. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Peter Maydell <peter.maydell@linaro.org> Cc: Will Deacon <will.deacon@arm.com>
-
Committed by Sudeep Holla
This patch adds support for cacheinfo on ARM64. On ARMv8, the cache hierarchy can be identified through the Cache Level ID (CLIDR) register, while the cache geometry is provided by the Cache Size ID (CCSIDR) register. Since the architecture doesn't provide any way of detecting the CPUs sharing a particular cache, the device tree is used for that purpose. Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
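As a rough illustration of the geometry decoding, here is a stand-alone sketch assuming the classic CCSIDR_EL1 layout (LineSize in bits [2:0], Associativity-1 in [12:3], NumSets-1 in [27:13]); the real driver also walks CLIDR_EL1 and selects each cache level via CSSELR_EL1, which is omitted here.

```c
#include <stdint.h>
#include <stdio.h>

struct cache_geom { unsigned line_size, ways, sets, size; };

/* Decode a CCSIDR_EL1 value (classic layout) into a cache geometry. */
static struct cache_geom decode_ccsidr(uint32_t ccsidr)
{
    struct cache_geom g;

    g.line_size = 1u << ((ccsidr & 0x7) + 4);          /* bytes per line */
    g.ways      = ((ccsidr >> 3) & 0x3FF) + 1;
    g.sets      = ((ccsidr >> 13) & 0x7FFF) + 1;
    g.size      = g.line_size * g.ways * g.sets;
    return g;
}

int main(void)
{
    /* Illustrative value: 64-byte lines, 4 ways, 128 sets -> 32 KiB. */
    uint32_t ccsidr = (127u << 13) | (3u << 3) | 2u;
    struct cache_geom g = decode_ccsidr(ccsidr);

    printf("%u-byte lines, %u ways, %u sets, %u bytes total\n",
           g.line_size, g.ways, g.sets, g.size);
    return 0;
}
```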
-
- 13 January 2015, 3 commits
-
-
Committed by Ard Biesheuvel
Now that we have moved the call to SetVirtualAddressMap() to the stub, UEFI has no use for the ID map, so we can drop the code that installs ID mappings for UEFI memory regions. Acked-by: Leif Lindholm <leif.lindholm@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Tested-by: Leif Lindholm <leif.lindholm@linaro.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
-
Committed by Ard Biesheuvel
Now that we are calling SetVirtualAddressMap() from the stub, there is no need to reserve boot-only memory regions, which implies that there is also no reason to free them again later. Acked-by: Leif Lindholm <leif.lindholm@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Tested-by: Leif Lindholm <leif.lindholm@linaro.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
-
Committed by Ard Biesheuvel
In order to support kexec, the kernel needs to be able to deal with the state of the UEFI firmware after SetVirtualAddressMap() has been called. To avoid having separate code paths for non-kexec and kexec, let's move the call to SetVirtualAddressMap() to the stub: this guarantees that it will only be called once (since the stub is not executed during kexec), and ensures that the UEFI state is identical between kexec and normal boot. This implies that the layout of the virtual mapping needs to be created by the stub as well. All regions are rounded up to a naturally aligned multiple of 64 KB (for compatibility with 64k-page kernels) and recorded in the UEFI memory map. The kernel proper reads those values and installs the mappings in a dedicated set of page tables that are swapped in during UEFI Runtime Services calls. Acked-by: Leif Lindholm <leif.lindholm@linaro.org> Acked-by: Matt Fleming <matt.fleming@intel.com> Tested-by: Leif Lindholm <leif.lindholm@linaro.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
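A small sketch of the 64 KB rounding mentioned above. The helper is illustrative rather than the stub code, but it shows why a region rounded out to 64 KiB boundaries maps cleanly on 64 KiB-page kernels as well as 4 KiB ones.

```c
#include <stdint.h>
#include <stdio.h>

#define SZ_64K 0x10000ull

/*
 * Round a [start, start + size) physical range out to 64 KiB boundaries,
 * so the resulting virtual mapping also works with 64 KiB page kernels.
 */
static void round_region(uint64_t start, uint64_t size,
                         uint64_t *out_start, uint64_t *out_size)
{
    uint64_t end = start + size;

    *out_start = start & ~(SZ_64K - 1);                              /* round start down */
    *out_size  = ((end + SZ_64K - 1) & ~(SZ_64K - 1)) - *out_start;  /* round end up */
}

int main(void)
{
    uint64_t s, sz;

    round_region(0x80001000, 0x3000, &s, &sz);
    printf("0x%llx + 0x%llx\n", (unsigned long long)s, (unsigned long long)sz);
    return 0;
}
```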
-
- 09 January 2015, 1 commit
-
-
Committed by Andy Lutomirski
On x86_64, at least, task_pt_regs may be only partially initialized in many contexts, so x86_64 should not use it without extra care from interrupt context, let alone NMI context. This will allow x86_64 to override the logic and will supply some scratch space to use to make a cleaner copy of user regs. Tested-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Andy Lutomirski <luto@amacapital.net> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: chenggang.qcg@taobao.com Cc: Wu Fengguang <fengguang.wu@intel.com> Cc: Namhyung Kim <namhyung@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Arjan van de Ven <arjan@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Jean Pihet <jean.pihet@linaro.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mark Salter <msalter@redhat.com> Cc: Russell King <linux@arm.linux.org.uk> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-arm-kernel@lists.infradead.org Link: http://lkml.kernel.org/r/e431cd4c18c2e1c44c774f10758527fb2d1025c4.1420396372.git.luto@amacapital.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 08 January 2015, 1 commit
-
-
Committed by Ard Biesheuvel
The early ioremap support introduced by patch bf4b558e ("arm64: add early_ioremap support") failed to add a call to early_ioremap_reset() at an appropriate time. Without this call, invocations of early_ioremap etc. that are done too late will go unnoticed and may cause corruption. This is exactly what happened when the first user of this feature was added in patch f84d0275 ("arm64: add EFI runtime services"). The early mapping of the EFI memory map is unmapped during an early initcall, at which time the early ioremap support is long gone. Fix this by adding the missing call to early_ioremap_reset() to setup_arch(), and move the offending early_memunmap() to right after the point where the early mapping of the EFI memory map is last used. Fixes: f84d0275 ("arm64: add EFI runtime services") Cc: <stable@vger.kernel.org> Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 07 January 2015, 1 commit
-
-
Committed by Paul Walmsley
On next-20150105, defconfig compilation breaks with: arch/arm64/kernel/smp_spin_table.c:80:2: error: implicit declaration of function ‘ioremap_cache’ [-Werror=implicit-function-declaration] arch/arm64/kernel/smp_spin_table.c:92:2: error: implicit declaration of function ‘writeq_relaxed’ [-Werror=implicit-function-declaration] arch/arm64/kernel/smp_spin_table.c:101:2: error: implicit declaration of function ‘iounmap’ [-Werror=implicit-function-declaration] Fix this by including asm/io.h, which contains definitions or prototypes for these macros or functions. This second version incorporates a comment from Mark Rutland <mark.rutland@arm.com> to keep the includes in alphabetical order by filename. Signed-off-by: Paul Walmsley <paul@pwsan.com> Cc: Paul Walmsley <pwalmsley@nvidia.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-