- 21 Aug 2015, 1 commit
-
By Will Deacon
We have a micro-optimisation on the fast syscall return path where we take care to keep x0 live with the return value from the syscall so that we can avoid restoring it from the stack. The benefit of doing this is fairly suspect, since we will be restoring x1 from the stack anyway (which lives adjacent in the pt_regs structure) and the only additional cost is saving x0 back to pt_regs after the syscall handler, which could be seen as a poor man's prefetch.

More importantly, this causes issues with the context tracking code: the ct_user_enter macro ends up branching into C code, which is free to use x0 as a scratch register and consequently leads to us returning junk back to userspace as the syscall return value. Rather than special-case the context-tracking code, this patch removes the questionable optimisation entirely.

Cc: <stable@vger.kernel.org>
Cc: Larry Bassel <larry.bassel@linaro.org>
Cc: Kevin Hilman <khilman@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 05 Aug 2015, 2 commits
-
By Will Deacon
The arm64 booting document requires that the bootloader has cleaned the kernel image to the PoC. However, when a CPU re-enters the kernel due to either a CPU hotplug "on" event or resuming from a low-power state (e.g. cpuidle), the kernel text may in fact be dirty at the PoU due to things like alternative patching or even module loading.

Thanks to I-cache speculation with the MMU off, stale instructions could be fetched prior to enabling the MMU, potentially leading to crashes when executing regions of code that have been modified at runtime.

This patch addresses the issue by ensuring that the local I-cache is invalidated immediately after a CPU has enabled its MMU but before jumping out of the identity mapping. Any stale instructions fetched from the PoC will then be discarded and refetched correctly from the PoU. Patching kernel text executed prior to the MMU being enabled is prohibited, so the early entry code will always be clean.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
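As a rough sketch of the maintenance sequence described here (the actual change lives in the early assembly paths; the C wrapper below is purely illustrative):

    /*
     * Illustrative only: invalidate the local I-cache once the MMU is on,
     * so stale lines fetched from the PoC are discarded and refetched
     * from the PoU.
     */
    static inline void local_icache_inval_all(void)
    {
            asm volatile("ic iallu\n\t"  /* invalidate all local I-cache lines */
                         "dsb nsh\n\t"   /* wait for the invalidation to complete */
                         "isb"           /* resynchronise the instruction stream */
                         : : : "memory");
    }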
-
By Will Deacon
In order to guarantee that the patched instruction stream is visible to a CPU, that CPU must execute an isb instruction after any related cache maintenance has completed. The instruction patching routines in kernel/insn.c get this right for things like jump labels and ftrace, but the alternatives patching omits it entirely, leaving secondary cores in a potential limbo between the old and the new code.

This patch adds an isb following the secondary polling loop in the alternatives patching.

Signed-off-by: Will Deacon <will.deacon@arm.com>
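In effect (flag name assumed for illustration), the secondary side of the patching rendezvous now ends with an isb:

    /* Secondary CPUs: spin until the boot CPU has finished patching. */
    while (!READ_ONCE(patching_done))
            cpu_relax();
    isb();  /* new: flush the pipeline so the patched code is refetched */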
-
- 03 Aug 2015, 1 commit
-
By Mark Rutland
To enable sharing with arm, move the core PSCI framework code to drivers/firmware. This results in a minor gain in lines of code, but this will quickly be amortised by the removal of code currently duplicated in arch/arm.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 01 Aug 2015, 1 commit
-
By Sudeep Holla
Commit 4b3dc967 ("arm64: force CONFIG_SMP=y and remove redundant #ifdefs") accidentally retained code for !CONFIG_SMP in the cpu_resume function. This resulted in the hash index in x7 being zeroed after it had been properly computed; that index is then used to get the cpu context pointer while resuming.

This patch removes the remnant code and restores the cpu suspend/resume functionality.

Fixes: 4b3dc967 ("arm64: force CONFIG_SMP=y and remove redundant #ifdefs")
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 31 Jul 2015, 2 commits
-
By Lorenzo Pieralisi
On ARM64 PROBE_ONLY PCI systems, resources are not currently claimed; therefore they cannot be enabled, since they do not have a valid parent pointer. This in turn prevents enabling PCI devices on ARM64 PROBE_ONLY systems, causing PCI device initialization to fail.

To solve this issue, resources must be claimed when devices are added on PROBE_ONLY systems, which ensures that the resource hierarchy is validated and the resource tree is sane. However, this requires changes in the ARM64 resource management that can adversely affect existing PCI set-ups (claiming resources on !PROBE_ONLY systems might break existing ARM64 PCI platform implementations).

As a temporary solution, in preparation for a proper resource-claiming implementation in the ARM64 core, this patch adds a pcibios_enable_device() arch implementation that simply prevents enabling resources on PROBE_ONLY systems (mirroring ARM behaviour). This is always a safe thing to do because on PROBE_ONLY systems the configuration space set-up can be considered immutable, and it prepares for the proper resource claiming that would finally validate the PCI resource tree in the ARM64 arch implementation on PROBE_ONLY systems.

For !PROBE_ONLY systems, resource enablement in pcibios_enable_device() on ARM64 is implemented as in the current PCI core, leaving the behaviour unchanged.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
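The arch hook described above reduces to something like the following (a sketch mirroring the arm implementation):

    int pcibios_enable_device(struct pci_dev *dev, int mask)
    {
            /*
             * On PROBE_ONLY systems the configuration space is considered
             * immutable, so skip resource enablement entirely.
             */
            if (pci_has_flag(PCI_PROBE_ONLY))
                    return 0;

            return pci_enable_resources(dev, mask);
    }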
-
By Will Deacon
When patching the kernel text with alternatives, we may end up patching parts of the stop_machine state machine (e.g. atomic_dec_and_test in ack_state) and consequently corrupt the instruction stream of any secondary CPUs.

This patch passes the cpu_online_mask to stop_machine, forcing all of the CPUs into our own callback, which can place the secondary cores into a dumb (but safe!) polling loop whilst the patching is carried out.

Signed-off-by: Will Deacon <will.deacon@arm.com>
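Schematically (helper and flag names assumed for illustration), the rendezvous looks like this:

    static int __apply_alternatives_multi_stop(void *unused)
    {
            if (smp_processor_id()) {
                    /* Secondary CPU: spin safely while the boot CPU patches. */
                    while (!READ_ONCE(patching_done))
                            cpu_relax();
            } else {
                    /* Boot CPU: patch the kernel text, then release the others. */
                    __apply_alternatives_all();
                    WRITE_ONCE(patching_done, 1);
            }
            return 0;
    }

    void apply_alternatives_all(void)
    {
            /* Force every online CPU into the callback above. */
            stop_machine(__apply_alternatives_multi_stop, NULL, cpu_online_mask);
    }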
-
- 30 Jul 2015, 2 commits
-
By Jonas Rabenstein
Commit 4b3dc967 ("arm64: force CONFIG_SMP=y and remove redundant #ifdefs") forces SMP on arm64. To build the necessary objects for SMP, they were added to the arm64-obj-y rule in arch/arm64/kernel/Makefile without removing the arm64-obj-$(CONFIG_SMP) rule.

Remove the redundant object file list that depends on the always-yes CONFIG_SMP in arch/arm64/kernel/Makefile.

Signed-off-by: Jonas Rabenstein <jonas.rabenstein@studium.uni-erlangen.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Jonas Rabenstein
Commit 4b3dc967 ("arm64: force CONFIG_SMP=y and remove redundant #ifdefs") forces SMP on arm64; UP_LATE_INIT depended on !SMP and therefore can not be selected anymore. Remove the dead #ifdef block depending on UP_LATE_INIT in arch/arm64/kernel/setup.c.

Signed-off-by: Jonas Rabenstein <jonas.rabenstein@studium.uni-erlangen.de>
[will: kill do_post_cpus_up_work altogether]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 28 Jul 2015, 1 commit
-
By Will Deacon
lib/list_sort.c defines a 'struct debug_el', where "el" is presumably a contraction of "element". This conflicts with 'enum debug_el' in our asm/debug-monitors.h header file, where "el" stands for Exception Level. The result is a build failure when targeting allmodconfig, so rename our enum to 'dbg_active_el' to be slightly more explicit about what it is.

Signed-off-by: Will Deacon <will.deacon@arm.com>
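For reference, the renamed enum presumably looks like this (a sketch of asm/debug-monitors.h):

    /* Exception levels that hardware debug can target from AArch64. */
    enum dbg_active_el {
            DBG_ACTIVE_EL0 = 0,
            DBG_ACTIVE_EL1,
    };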
-
- 27 Jul 2015, 25 commits
-
By Will Deacon
cpuid_feature_extract_field takes care of the fiddly ID register field sign-extension, so use that instead of rolling our own version.

Signed-off-by: Will Deacon <will.deacon@arm.com>
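The helper hides the sign-extension of the 4-bit ID register fields; roughly:

    /* Extract a signed 4-bit feature field starting at bit 'field'. */
    static inline int cpuid_feature_extract_field(u64 features, int field)
    {
            return (s64)(features << (64 - 4 - field)) >> 60;
    }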
-
By Will Deacon
Rework the cpufeature detection to support ISAR0 and use that for detecting the presence of LSE atomics.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon
Other CPU features follow an 'ARM64_HAS_*' naming scheme, so do the same for the LSE atomics.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon
On CPUs which support the LSE atomic instructions introduced in ARMv8.1, it makes sense to use them in preference to ll/sc sequences.

This patch introduces runtime patching of atomic_t and atomic64_t routines so that the call-site for the out-of-line ll/sc sequences is patched with an LSE atomic instruction when we detect that the CPU supports it.

If binutils is not recent enough to assemble the LSE instructions, then the ll/sc sequences are inlined as though CONFIG_ARM64_LSE_ATOMICS=n.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
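Much simplified, each call-site assembles both options and lets boot-time patching choose (illustrative only; the real macros and register constraints are more involved):

    static inline void atomic_add(int i, atomic_t *v)
    {
            register int w0 asm ("w0") = i;
            register atomic_t *x1 asm ("x1") = v;

            asm volatile(ALTERNATIVE(
                    "bl    __ll_sc_atomic_add",   /* default: out-of-line ll/sc */
                    "stadd %w[i], %[v]",          /* patched in when LSE is present */
                    ARM64_HAS_LSE_ATOMICS)
            : [i] "+r" (w0), [v] "+Q" (v->counter)
            : "r" (x1)
            : "x30", "memory");
    }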
-
By Will Deacon
Add a CPU feature for the LSE atomic instructions, so that they can be patched in at runtime when we detect that they are supported.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon
The ARM v8.1 architecture introduces new atomic instructions to the A64 instruction set for things like cmpxchg, so advertise their availability to userspace using a hwcap.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
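Userspace can then test for the feature at run time, for example:

    #include <sys/auxv.h>
    #include <asm/hwcap.h>          /* HWCAP_ATOMICS */

    int have_lse(void)
    {
            /* Non-zero when the kernel advertises ARMv8.1 LSE atomics. */
            return !!(getauxval(AT_HWCAP) & HWCAP_ATOMICS);
    }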
-
By Dave P Martin
The generic slowpath WARN implementation prints a backtrace, but the report_bug() based implementation does not, opting to print the registers instead, which is generally not as useful. Ideally, report_bug() should be fixed to make the behaviour more consistent, but in the meantime this patch generates a backtrace directly from the arm64 backend instead, so that this functionality is not lost with the migration to report_bug().

As a side-effect, the backtrace will be outside the oops end marker, but that's hard to avoid without modifying generic code. This patch can go away if report_bug() grows the ability in the future to generate a backtrace directly or to call an arch hook at the appropriate time.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Dave P Martin
Currently, the minimal default BUG() implementation from asm-generic is used for arm64. This patch uses the BRK software breakpoint instruction to generate a trap instead, similarly to most other arches, with the generic BUG code generating the dmesg boilerplate.

This allows bug metadata to be moved to a separate table and reduces the amount of inline code at BUG and WARN sites. This also avoids clobbering any registers before they can be dumped.

To mitigate the size of the bug table further, this patch makes use of the existing infrastructure for encoding addresses within the bug table as 32-bit offsets instead of absolute pointers. (Note that this limits the kernel size to 2GB.)

Traps are registered at arch_initcall time for aarch64, but BUG has minimal real dependencies and it is desirable to be able to generate bug splats as early as possible. This patch redirects all debug exceptions caused by BRK directly to bug_handler() until the full debug exception support has been initialised.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
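The 32-bit-offset encoding mentioned above comes from the generic bug table entry, which looks roughly like this when relative pointers are enabled (a simplified sketch of asm-generic/bug.h):

    struct bug_entry {
    #ifdef CONFIG_GENERIC_BUG_RELATIVE_POINTERS
            signed int      bug_addr_disp;  /* offset from the entry itself */
            signed int      file_disp;
    #else
            unsigned long   bug_addr;       /* absolute pointer: twice the size */
            const char      *file;
    #endif
            unsigned short  line;
            unsigned short  flags;
    };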
-
By Dave P Martin
The way the KGDB_DYN_BRK_INS_BYTEx macros are declared is more complex than it needs to be. Also, the macros are only used in one place, which is arch-specific anyway.

This patch refactors the macros to simplify them, and exposes an argument so that we can have a single macro instead of 4. As a side effect, this patch also fixes some anomalous spellings of "KGDB".

These changes alter the compile-time types of some integer constants; this is harmless, but triggers truncation warnings in gcc when they are assigned to 32-bit variables. This patch adds an explicit cast for the affected cases.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Dave P Martin
The naming of DBG_ESR_VAL_BRK is inconsistent with the way other similar macros are named. This patch makes the naming more consistent, and appends "64" as a reminder that this ESR pattern only matches from AArch64 state.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By yalin wang
A small change to the patch_map() function: use set_fixmap_offset() to make the code clearer.

Signed-off-by: yalin wang <yalin.wang2010@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
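The appeal of set_fixmap_offset() is that it maps the page and folds the sub-page offset back in, in one step (a sketch; variable names assumed):

    /* Map the page containing 'addr' and return a pointer to the same
     * byte within the fixmap mapping. */
    return (void *)set_fixmap_offset(fixmap, page_to_phys(page) +
                                     (uintaddr & ~PAGE_MASK));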
-
By Ard Biesheuvel
When allocating memory for the kernel image, try the AllocatePages() boot service to obtain memory at the preferred offset of 'dram_base + TEXT_OFFSET', and only revert to efi_low_alloc() if that fails. This is the only way to allocate at the base of DRAM if DRAM starts at 0x0, since efi_low_alloc() refuses to allocate at 0x0.

Tested-by: Haojian Zhuang <haojian.zhuang@linaro.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Sudeep Holla
Since both CONFIG_ACPI and CONFIG_OF are enabled when booting using ACPI tables on ARM64 platforms, we get a few device tree warnings which are not valid for ACPI boot. We can use of_have_populated_dt to check whether the device tree is populated before emitting those errors.

This patch uses of_have_populated_dt to suppress device tree warnings that are not legitimate when booting using ACPI tables.

Cc: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
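The guard is a one-liner of the following shape (sketch):

    if (!of_have_populated_dt())
            return 0;       /* ACPI boot: no DT to validate, skip the warnings */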
-
By James Morse
'Privileged Access Never' is a new ARMv8.1 feature which prevents privileged code from accessing any virtual address where read or write access is also permitted at EL0.

This patch enables the PAN feature on all CPUs, and modifies the {get,put}_user helpers to temporarily permit access. This will catch kernel bugs where user memory is accessed directly. 'Unprivileged loads and stores' using ldtrb et al are unaffected by PAN.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
[will: use ALTERNATIVE in asm and tidy up pan_enable check]
Signed-off-by: Will Deacon <will.deacon@arm.com>
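Schematically, each uaccess helper brackets the access with PSTATE.PAN toggles that are patched in via alternatives (a sketch; __arch_copy_from_user stands in for the underlying copy routine):

    static inline unsigned long
    copy_from_user_sketch(void *to, const void __user *from, unsigned long n)
    {
            /* Clear PSTATE.PAN so the kernel may touch user memory. */
            asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,
                            CONFIG_ARM64_PAN));
            n = __arch_copy_from_user(to, from, n);
            /* Restore PAN: direct user accesses will fault again. */
            asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,
                            CONFIG_ARM64_PAN));
            return n;
    }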
-
By James Morse
When a new cpu feature is available, the cpu feature bits will have some initial value, which is incremented when the feature is updated. This patch changes 'register_value' to 'min_field_value', and checks that the feature bits value (interpreted as a signed int) meets this minimum.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
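The check then becomes a signed comparison against the minimum (a sketch, close to the cpufeature code):

    static bool
    feature_matches(u64 reg, const struct arm64_cpu_capabilities *entry)
    {
            int val = cpuid_feature_extract_field(reg, entry->field_pos);

            /* Fields are signed: negative values mean "not implemented". */
            return val >= entry->min_field_value;
    }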
-
By James Morse
This patch adds an 'enable()' callback to cpu capability/feature detection, allowing features that require some setup or configuration to get this opportunity once the feature has been detected.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
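In outline (a sketch; the match-criteria fields are elided):

    struct arm64_cpu_capabilities {
            const char *desc;
            u16 capability;
            bool (*matches)(const struct arm64_cpu_capabilities *);
            void (*enable)(void);   /* new: optional, run once detected */
            /* ... match criteria (sys_reg, field_pos, min_field_value, ...) */
    };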
-
By James Morse
Later patches need config_sctlr_el1 to set/clear bits in the sctlr_el1 register. This patch moves the function into a header file.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
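The helper itself is a small read-modify-write of SCTLR_EL1 (a sketch matching its likely shape at the time):

    static inline void config_sctlr_el1(u32 clear, u32 set)
    {
            u32 val;

            asm volatile("mrs %0, sctlr_el1" : "=r" (val));
            val &= ~clear;
            val |= set;
            asm volatile("msr sctlr_el1, %0" : : "r" (val));
    }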
-
By Daniel Thompson
Convert the dynamic patching for ARM64_WORKAROUND_845719 over to the newly added alternative assembler macros.

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Mark Rutland
Most of the cache events an architecture might support do not map well to those provided by the ARM architecture, and as such most entries in the event number maps are *_UNSUPPORTED. Unfortunately, as 0 is a valid physical event identifier, the *_UNSUPPORTED macros expand to a non-zero value and thus each unsupported event must be explicitly initialised as such. This leads to large diffs when adding support for a new CPU, and makes it difficult to spot the important information.

This patch follows arch/arm/ in making use of PERF_*_ALL_UNSUPPORTED macros to initialise all entries to *_UNSUPPORTED before overriding this for the specific events we actually support, resulting in a significant source code reduction.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
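The pattern relies on later designated initialisers overriding earlier ones (a sketch):

    #define PERF_MAP_ALL_UNSUPPORTED \
            [0 ... PERF_COUNT_HW_MAX - 1] = HW_OP_UNSUPPORTED

    static const unsigned armv8_pmuv3_perf_map[PERF_COUNT_HW_MAX] = {
            PERF_MAP_ALL_UNSUPPORTED,
            /* Only the supported events need spelling out. */
            [PERF_COUNT_HW_CPU_CYCLES]   = ARMV8_PMUV3_PERFCTR_CLOCK_CYCLES,
            [PERF_COUNT_HW_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_INSTR_EXECUTED,
    };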
-
By Will Deacon
Nobody seems to be producing !SMP systems anymore, so this is just becoming a source of kernel bugs, particularly if people want to use coherent DMA with non-shared pages. This patch forces CONFIG_SMP=y for arm64, removing a modest amount of code in the process.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Mark Rutland
We currently bundle the callchain handling code with the PMU code, despite the fact that the two are distinct, and the former can be useful even in the absence of the latter. Follow the example of arch/arm and factor the callchain handling into its own file, dependent on CONFIG_PERF_EVENTS rather than CONFIG_HW_PERF_EVENTS.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Catalin Marinas
The compat ptrace interface allows access to the TLS register, hardware breakpoints and watchpoints, and the syscall number. However, a native task using the native ptrace interface to debug compat tasks (e.g. a multi-arch gdb) only has access to the general and VFP register sets; the compat ptrace interface cannot be accessed from a native task.

This patch adds a new user_aarch32_ptrace_view which contains the TLS, hardware breakpoint/watchpoint and syscall number regsets in addition to the existing GPR and VFP regsets. This view is backwards compatible with previous kernels. Core dumping of 32-bit tasks and compat ptrace are not affected, since the original user_aarch32_view is preserved.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Yao Qi <yao.qi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Rohit Thapliyal
On a 64-bit kernel, dump_mem prints 32-bit addresses in the stack dump, which gives a disorganised view of the 64-bit values on the stack. Hence, modify it to produce a complete 64-bit memory dump.

With the patch:

[ 93.534801] Process insmod (pid: 1587, stack limit = 0xffffffc976be4058)
[ 93.541441] Stack: (0xffffffc976be7cf0 to 0xffffffc976be8000)
[ 93.547136] 7ce0: ffffffc976be7d00 ffffffc00008163c
[ 93.554898] 7d00: ffffffc976be7d40 ffffffc0000f8a44 ffffffc00098ef38 ffffffbffc000088
[ 93.562659] 7d20: ffffffc00098ef50 ffffffbffc0000c0 0000000000000001 ffffffbffc000070
[ 93.570419] 7d40: ffffffc976be7e40 ffffffc0000f935c 0000000000000000 000000002b424090
[ 93.578179] 7d60: 000000002b424010 0000007facc555f4 0000000080000000 0000000000000015
[ 93.585937] 7d80: 0000000000000116 0000000000000069 ffffffc00097b000 ffffffc976be4000
[ 93.593694] 7da0: 0000000000000064 0000000000000072 000000000000006e 000000000000003f
[ 93.601453] 7dc0: 000000000000feff 000000000000fff1 ffffffbffc002028 0000000000000124
[ 93.609211] 7de0: ffffffc976be7e10 0000000000000001 ffffff8000000000 ffffffbbffff0000
[ 93.616969] 7e00: ffffffc976be7e60 0000000000000000 0000000000000000 0000000000000000
[ 93.624726] 7e20: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[ 93.632484] 7e40: 0000007fcc474550 ffffffc0000841ec 000000002b424010 0000007facda0710
[ 93.640241] 7e60: ffffffffffffffff ffffffc0000be6dc ffffff80007d2000 000000000001c010
[ 93.647999] 7e80: ffffff80007e0ae0 ffffff80007e09d0 ffffff80007edf70 0000000000000288
[ 93.655757] 7ea0: 00000000000002e8 0000000000000000 0000000000000000 0000001c0000001b
[ 93.663514] 7ec0: 0000000000000009 0000000000000007 000000002b424090 000000000001c010
[ 93.671272] 7ee0: 000000002b424010 0000007faccd3a48 0000000000000000 0000000000000000
[ 93.679030] 7f00: 0000007fcc4743f8 0000007fcc4743f8 0000000000000069 0000000000000003
[ 93.686787] 7f20: 0101010101010101 0000000000000004 0000000000000020 00000000000003f3
[ 93.694544] 7f40: 0000007facb95664 0000007facda7030 0000007facc555d0 0000000000498378
[ 93.702301] 7f60: 0000000000000000 000000002b424010 0000007facda0710 000000002b424090
[ 93.710058] 7f80: 0000007fcc474698 0000000000498000 0000007fcc474ebb 0000000000474f58
[ 93.717815] 7fa0: 0000000000498000 0000000000000000 0000000000000000 0000007fcc474550
[ 93.725573] 7fc0: 00000000004104bc 0000007fcc474430 0000007facc555f4 0000000080000000
[ 93.733330] 7fe0: 000000002b424090 0000000000000069 0950020128000244 4104000008000004
[ 93.741084] Call trace:

The above output makes a debugger's life a lot easier.

Signed-off-by: Rohit Thapliyal <r.thapliyal@samsung.com>
Signed-off-by: Maninder Singh <maninder1.s@samsung.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Sudeep Holla
arch_find_n_match_cpu_physical_id parses the device tree to get the device node for a given logical cpu index. However, since ARM PMUs get probed after the CPU device nodes are stashed while registering the cpus, we can use of_cpu_device_node_get to avoid another DT parse. This patch replaces arch_find_n_match_cpu_physical_id with of_cpu_device_node_get, reusing the stashed value directly.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
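The replacement is essentially (a sketch):

    struct device_node *dn;

    /* Reuse the node stashed when the CPU device was registered,
     * instead of re-walking the DT for the physical CPU id. */
    dn = of_cpu_device_node_get(cpu);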
-
By Suzuki K. Poulose
The ARM64 pmu prints an error message in event_init() when no hardware PMU is available. This is pretty annoying, as it keeps printing the message for every single attempt, flooding the kernel logs unnecessarily. The return code is sufficient for the user to figure out the reason.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 22 Jul 2015, 2 commits
-
By Jiang Liu
This is a preparatory patch for moving irq_data struct members.

Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Will Deacon
Commit 0c8c0f03 ("x86/fpu, sched: Dynamically allocate 'struct fpu'") moved the thread_struct to the bottom of task_struct. As a result, the offset is now too large to be used in an immediate add on arm64 with some kernel configs:

arch/arm64/kernel/entry.S: Assembler messages:
arch/arm64/kernel/entry.S:588: Error: immediate out of range
arch/arm64/kernel/entry.S:597: Error: immediate out of range

This patch calculates the offset using an additional register instead of an immediate offset.

Fixes: 0c8c0f03 ("x86/fpu, sched: Dynamically allocate 'struct fpu'")
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Olof Johansson <olof@lixom.net>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 10 Jul 2015, 1 commit
-
By Mark Rutland
We currently set x27 in compat_sys_sigreturn_wrapper and compat_sys_rt_sigreturn_wrapper, similarly to what we do with r8/why on 32-bit ARM, in an attempt to prevent sigreturns from being restarted. However, on arm64 we have always used pt_regs::syscallno for syscall restarting (for both native and compat tasks), and x27 is never inspected again before being overwritten in kernel_exit.

This patch removes the pointless register assignments.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 09 Jul 2015, 1 commit
-
By Mark Rutland
Currently we enable debug exceptions before reading ESR_EL1 in both el0_inv and el1_inv. If a debug exception is taken before we read ESR_EL1, the value will have been corrupted.

As el*_inv is typically fatal, an intervening debug exception results in misleading debug information being logged to the console, but is not otherwise harmful. As with the other entry paths, we can use the ESR_EL1 value stashed earlier in the exception entry (in x25 for el0_sync{,_compat}, and x1 for el1_sync), giving us better error reporting in this case.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 07 Jul 2015, 1 commit
-
By Al Stone
For those parts of the arm64 ACPI code that need to check GICC subtables in the MADT, use the new BAD_MADT_GICC_ENTRY macro instead of the previous BAD_MADT_ENTRY. The new macro takes into account differences in the size of the GICC subtable that the old macro did not; this caused failures even though the subtable entries are valid.

Fixes: aeb823bb ("ACPICA: ACPI 6.0: Add changes for FADT table.")
Signed-off-by: Al Stone <al.stone@linaro.org>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-