- 08 Sep 2016, 1 commit
-
-
Committed by Robin Murphy

When zeroing an I/O location, the current accessors are forced to allocate a temporary register to store the zero for the write. By tweaking the assembly constraints, we can allow the compiler to use the zero register directly in such cases, and save some juggling. Compiling a representative kernel configuration with GCC 6 shows that 2.3KB worth of code was being wasted just on that!

      text     data       bss       dec      hex  filename
  13316776  3248256  18176769  34741801  2121e29  vmlinux.o.new
  13319140  3248256  18176769  34744165  2122765  vmlinux.o.old

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
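A minimal sketch of the constraint tweak, assuming the usual str-based accessor shape (this is not the exact kernel code): GCC's "Z" constraint matches a constant zero and renders it as the zero register, so an "rZ" operand lets a store of 0 use wzr directly instead of first materialising the zero in a temporary.

  #include <linux/types.h>

  /*
   * Sketch only: with "rZ" in place of "r", writing a constant 0
   * lets the compiler emit "str wzr, [addr]" with no scratch
   * register allocated just to hold the zero.
   */
  static inline void sketch_writel(u32 val, volatile void __iomem *addr)
  {
          asm volatile("str %w0, [%1]" : : "rZ" (val), "r" (addr));
  }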
-
- 07 Sep 2016, 4 commits
-
-
Committed by Catalin Marinas

This patch adds static keys transparently for all the cpu_hwcaps features by implementing an array of default-false static keys and enabling them when detected. The cpus_have_cap() check uses the static keys if the feature being checked is a constant, otherwise the compiler generates the bitmap test. Because of the early call to static_branch_enable() via check_local_cpu_errata() -> update_cpu_capabilities(), the jump labels are initialised in cpuinfo_store_boot_cpu().

Cc: Will Deacon <will.deacon@arm.com>
Cc: Suzuki K. Poulose <Suzuki.Poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
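A hedged sketch of the resulting check, following the shape described above (the array and bitmap names are taken from the commit's context):

  extern struct static_key_false cpu_hwcap_keys[ARM64_NCAPS];
  extern DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);

  /* Constant capability numbers take the static-key fast path;
   * runtime values fall back to the bitmap test. */
  static inline bool cpus_have_cap(unsigned int num)
  {
          if (num >= ARM64_NCAPS)
                  return false;
          if (__builtin_constant_p(num))
                  return static_branch_unlikely(&cpu_hwcap_keys[num]);
          return test_bit(num, cpu_hwcaps);
  }

When num is a compile-time constant, the static branch compiles down to a patchable NOP/branch instead of a load-and-test.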
-
Committed by Catalin Marinas

The static key API is currently designed around single variable definitions. There are cases where an array of static keys is desirable, so extend the API to allow this rather than using the internal static key implementation directly.

Cc: Jason Baron <jbaron@akamai.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Suggested-by: Dave P Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
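A sketch of how the extended API reads in use, assuming the DEFINE_STATIC_KEY_ARRAY_FALSE spelling this series introduces (array name and size follow the arm64 user above):

  /* An array of ARM64_NCAPS keys, all initialised to false. */
  DEFINE_STATIC_KEY_ARRAY_FALSE(cpu_hwcap_keys, ARM64_NCAPS);

  /* Flip one entry once the corresponding feature is detected. */
  static void enable_detected_cap(unsigned int num)
  {
          static_branch_enable(&cpu_hwcap_keys[num]);
  }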
-
Committed by Kefeng Wang

There is only fixup_init() in mm.h, and it is only called from free_initmem(), so move the code from fixup_init() into free_initmem(), then drop fixup_init() and mm.h.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Marc Zyngier

As declared by the chief penguin, and enforced by the NO_IRQ brigade, IRQ0 doesn't exist and is considered an error (no irq). Unfortunately, the arm_pmu driver still considers it valid in a large number of cases. Let's fix this.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
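A minimal sketch of the convention being enforced (the function is hypothetical; platform_get_irq() is the usual source of such values):

  #include <linux/platform_device.h>

  static int sketch_get_pmu_irq(struct platform_device *pdev)
  {
          int irq = platform_get_irq(pdev, 0);

          /* IRQ0 is not a valid interrupt: treat <= 0 as "no irq" */
          if (irq <= 0)
                  return -ENODEV;

          return irq;
  }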
-
- 05 Sep 2016, 3 commits
-
-
Committed by Pratyush Anand

Currently, enabling stacktrace for kprobe events generates a warning:

  echo stacktrace > /sys/kernel/debug/tracing/trace_options
  echo "p xhci_irq" > /sys/kernel/debug/tracing/kprobe_events
  echo 1 > /sys/kernel/debug/tracing/events/kprobes/enable

  save_stack_trace_regs() not implemented yet.
  ------------[ cut here ]------------
  WARNING: CPU: 1 PID: 0 at ../kernel/stacktrace.c:74 save_stack_trace_regs+0x3c/0x48
  Modules linked in:
  CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.8.0-rc4-dirty #5128
  Hardware name: ARM Juno development board (r1) (DT)
  task: ffff800975dd1900 task.stack: ffff800975ddc000
  PC is at save_stack_trace_regs+0x3c/0x48
  LR is at save_stack_trace_regs+0x3c/0x48
  pc : [<ffff000008126c64>] lr : [<ffff000008126c64>] pstate: 600003c5
  sp : ffff80097ef52c00
  Call trace:
   save_stack_trace_regs+0x3c/0x48
   __ftrace_trace_stack+0x168/0x208
   trace_buffer_unlock_commit_regs+0x5c/0x7c
   kprobe_trace_func+0x308/0x3d8
   kprobe_dispatcher+0x58/0x60
   kprobe_breakpoint_handler+0xbc/0x18c
   brk_handler+0x50/0x90
   do_debug_exception+0x50/0xbc

This patch implements save_stack_trace_regs(), so that the stacktrace of kprobe events can be obtained. After this patch, there is no warning and we can see the stacktrace for kprobe events in the trace buffer:

  more /sys/kernel/debug/tracing/trace
  <idle>-0     [004] d.h.  1356.000496: p_xhci_irq_0: (xhci_irq+0x0/0x9ac)
  <idle>-0     [004] d.h.  1356.000497: <stack trace>
   => xhci_irq
   => __handle_irq_event_percpu
   => handle_irq_event_percpu
   => handle_irq_event
   => handle_fasteoi_irq
   => generic_handle_irq
   => __handle_domain_irq
   => gic_handle_irq
   => el1_irq
   => arch_cpu_idle
   => default_idle_call
   => cpu_startup_entry
   => secondary_start_kernel
   =>

Tested-by: David A. Long <dave.long@linaro.org>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
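A hedged sketch of the new function, closely following the existing unwinder plumbing in arch/arm64/kernel/stacktrace.c (save_trace and struct stack_trace_data are helpers from that file; the FUNCTION_GRAPH_TRACER detail is omitted):

  void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
  {
          struct stack_trace_data data;
          struct stackframe frame;

          data.trace = trace;
          data.skip = trace->skip;
          data.no_sched_functions = 0;

          /* seed the unwinder from the exception registers */
          frame.fp = regs->regs[29];
          frame.sp = regs->sp;
          frame.pc = regs->pc;

          walk_stackframe(current, &frame, save_trace, &data);
          if (trace->nr_entries < trace->max_entries)
                  trace->entries[trace->nr_entries++] = ULONG_MAX;
  }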
-
Committed by Ard Biesheuvel

Commit b5fe2429 ("arm64: kernel: fix style issues in sleep.S") changed the linkage of _cpu_resume() to local, even though the symbol is also referenced from hibernate.c. So revert this change.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by James Morse

The code that provides /dev/mem uses xlate_dev_mem_{k,}ptr() to avoid making a cacheable mapping of a non-cacheable area on ia64. On arm64 we do this via phys_mem_access_prot() instead, but provide dummy versions of xlate_dev_mem_{k,}ptr(). These are the same as those in asm-generic/io.h, which we include from asm/io.h.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 02 Sep 2016, 9 commits
-
-
Committed by Will Deacon

Single-step traps to userspace (e.g. via ptrace) are expected to use TRAP_TRACE for the si_code field of the siginfo, as opposed to the TRAP_HWBKPT that we report currently. Fix the reported value, which has no effect on existing and legacy builds of GDB.

Reported-by: Yao Qi <yao.qi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
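A hedged sketch of the reporting path (the wrapper function is illustrative; only the si_code value is the point of the fix):

  static void sketch_report_user_step(struct pt_regs *regs)
  {
          siginfo_t info = {
                  .si_signo = SIGTRAP,
                  .si_errno = 0,
                  .si_code  = TRAP_TRACE,  /* previously TRAP_HWBKPT */
                  .si_addr  = (void __user *)instruction_pointer(regs),
          };

          force_sig_info(SIGTRAP, &info, current);
  }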
-
Committed by Ard Biesheuvel

Now that the only remaining occurrences of the use of callee-saved registers are on the primary boot path, add a comment to the code documenting which register is used for what.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Ard Biesheuvel

Instead of stashing the value of the link register in x28 before setting up the stack and calling into C code, create an ordinary PCS-compatible stack frame so that we can push the return address onto the stack. Since exception handlers require a stack as well, assign the stack pointer register before installing the vector table. Note that this accounts for the difference between THREAD_START_SP and THREAD_SIZE, given that the stack pointer is always decremented before calling into any C code.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Ard Biesheuvel

Keeping __PHYS_OFFSET in x24 is actually less clear than simply taking the value of __PHYS_OFFSET using an adrp instruction in the three places that we need it. So change that.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Ard Biesheuvel

Using x27 for passing to __enable_mmu what is essentially the return address makes the code look more complicated than it needs to be. So switch to x30/lr, and update the secondary and cpu_resume call sites to simply call __enable_mmu as an ordinary function, with a bl instruction. This requires the callers to be covered by .idmap.text.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Ard Biesheuvel

The KASLR processing is only used by the primary boot path, and complements the processing that takes place in __primary_switch(). Move the two parts together, to make the code easier to understand. Also, fix up a minor whitespace issue.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[will: fixed conflict with -rc3 due to lack of fd363bd4]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Ard Biesheuvel

The function el2_setup() passes its return value in register w20, and in the two cases where the caller actually cares about this return value, it is passed into set_cpu_boot_mode_flag() [almost] directly, which expects its input in w20 as well. So there is no reason to use a 'special' callee-saved register here; we can simply follow the PCS for the return value and first argument, respectively.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Ard Biesheuvel

This fixes a number of style issues in sleep.S. No functional changes are intended:
- replace absolute literal references with relative references in __cpu_suspend_enter(), which executes from its virtual address
- replace explicit lr assignment plus branch with bl in cpu_resume(), which aligns it with stext() and secondary_startup()
- don't export _cpu_resume()
- use adr_l for the mpidr_hash reference, and fix the incorrect accompanying comment, which has been out of date since commit cabe1c81 ("arm64: Change cpu_resume() to enable mmu early then access sleep_sp by va")
- replace leading spaces with tabs, and add a bit of whitespace for readability

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Vladimir Murzin

Commit e19a6ee2 ("arm64: kernel: Save and restore UAO and addr_limit on exception entry") states that the exception handler inherits the original PSTATE.UAO value, so UAO needs to be reset explicitly. However, the ARMv8.2 Extension documentation says:

  PSTATE.UAO is copied to SPSR_ELx.UAO and is then set to 0 on an
  exception taken from AArch64 to AArch64

so the hardware already does the right thing.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 01 Sep 2016, 4 commits
-
-
Committed by Will Deacon

The arm64 debug monitor initialisation code uses a CPU hotplug notifier to clear the OS lock when CPUs come online. This patch converts the code to the new hotplug mechanism.

Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
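A hedged sketch of the conversion pattern (the callback body and state name string are illustrative; cpuhp_setup_state() and the CPUHP_AP_ARM64_DEBUG_MONITORS_STARTING state are the pieces such a conversion relies on):

  static int sketch_os_lock_online(unsigned int cpu)
  {
          /* clear the OS Lock so hardware debug exceptions can be taken */
          asm volatile("msr oslar_el1, %0" : : "r" (0UL));
          isb();
          return 0;
  }

  static int __init sketch_debug_monitors_init(void)
  {
          /* replaces the old register_cpu_notifier() dance */
          return cpuhp_setup_state(CPUHP_AP_ARM64_DEBUG_MONITORS_STARTING,
                                   "arm64/debug_monitors:starting",
                                   sketch_os_lock_online, NULL);
  }

The startup callback runs on each CPU as it comes online, so no separate handling of already-online CPUs is needed at registration time.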
-
Committed by Will Deacon

The arm64 hw_breakpoint implementation uses a CPU hotplug notifier to reset the {break,watch}point registers when CPUs come online. This patch converts the code to the new hotplug mechanism, whilst moving the invocation earlier to remove the need to disable IRQs explicitly in the driver (which could cause havoc if we trip a watchpoint in an IRQ handler whilst restoring the debug register state).

Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by zijun_hu

Remove a duplicate __KERNEL__ macro check.

Signed-off-by: zijun_hu <zijun_hu@htc.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon

When TIF_SINGLESTEP is set for a task, the single-step state machine is enabled and we must take care not to reset it to the active-not-pending state if it is already in the active-pending state. Unfortunately, that's exactly what user_enable_single_step does, by unconditionally setting the SS bit in the SPSR for the current task. This causes failures in the GDB testsuite, where GDB ends up missing expected step traps if the instruction being stepped generates another trap, e.g. PTRACE_EVENT_FORK from an SVC instruction. This patch fixes the problem by preserving the current state of the stepping state machine when TIF_SINGLESTEP is already set on the current thread.

Cc: <stable@vger.kernel.org>
Reported-by: Yao Qi <yao.qi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
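A sketch very close to the fix itself: only arm the state machine on the first request, so an already-pending step is left alone (set_regs_spsr_ss() is the existing helper that sets SPSR.SS):

  void user_enable_single_step(struct task_struct *task)
  {
          struct thread_info *ti = task_thread_info(task);

          /* Only reset the state machine to active-not-pending on the
           * first request; if TIF_SINGLESTEP was already set, preserve
           * the (possibly active-pending) SPSR.SS state. */
          if (!test_and_set_ti_thread_flag(ti, TIF_SINGLESTEP))
                  set_regs_spsr_ss(task_pt_regs(task));
  }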
-
- 31 Aug 2016, 6 commits
-
-
Committed by Ard Biesheuvel

Expose the arm64_ftr_reg struct covering CTR_EL0 outside of cpufeature.o so that other code can refer to it directly (i.e., without performing the binary search).

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
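A hedged sketch of what this enables (arm64_ftr_reg_ctrel0 is the exported instance; the sys_val field holds the system-wide sanitised register value):

  #include <asm/cpufeature.h>

  extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;

  /* Code elsewhere can now read the sanitised CTR_EL0 value without
   * the binary search over the feature-register table. */
  static inline u64 sketch_read_sanitised_ctr(void)
  {
          return arm64_ftr_reg_ctrel0.sys_val;
  }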
-
Committed by Ard Biesheuvel

Constify the arm64_ftr_regs array by moving the mutable arm64_ftr_reg fields out of the array itself. This also streamlines the bsearch, since the entire array can be covered by fewer cachelines. Moving the payload out of the array also allows us to have a special, explicitly defined struct instance in case other code needs to refer to it directly. Note that this replaces the runtime sorting of the array with a runtime BUG() check that the array is sorted correctly in the code.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Ard Biesheuvel

The arm64_ftr_bits structures are never modified, so make them read-only.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Kefeng Wang

UDBG_UNDEFINED/SYSCALL/BADABORT/SEGV are only used to show verbose user fault messages on arm, not arm64, so drop them.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Kim Phillips

Any arm64-based parts that have cache-aliasing issues can set it manually. Apparently dragged in from ARM(32) defaults in commit 8c2c3df3 ("arm64: Build infrastructure").

Signed-off-by: Kim Phillips <kim.phillips@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Michal Marek

The make rpm target depends on a proper UTS_MACHINE definition. Also, use the variable in arch/arm64/kernel/setup.c, so that it's not accidentally removed in the future.

Reported-and-tested-by: Fabian Vogt <fvogt@suse.com>
Signed-off-by: Michal Marek <mmarek@suse.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 26 Aug 2016, 11 commits
-
-
Committed by Will Deacon

Cortex-A53 erratum 843419 is worked around by the linker, although it is a configure-time option to GCC as to whether ld is actually asked to apply the workaround or not. This patch ensures that we pass --fix-cortex-a53-843419 to the linker when both CONFIG_ARM64_ERRATUM_843419=y and the linker supports the option.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by James Morse

Now that we use the MPIDR to resume on the same CPU that we hibernated on, we no longer need to refuse to hibernate if the boot cpu is offline (which we can't possibly know if kexec causes logical CPUs to be renumbered). This reverts commit 1fe492ce.

Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by James Morse

disable_nonboot_cpus() assumes that the lowest numbered online CPU is the boot CPU, and that this is the correct CPU to run any power management code on. On arm64 CPU0 can be taken offline. For hibernate/resume this means we may hibernate on a CPU other than CPU0. If the system is rebooted with kexec, 'CPU0' will be assigned to a different CPU. This complicates hibernate/resume, as now we can't trust the CPU numbers. We currently forbid hibernate if CPU0 has been hotplugged out, to avoid this situation without kexec. Save the MPIDR of the CPU we hibernated on in the hibernate arch-header, and use hibernate_resume_nonboot_cpu_disable() to choose which CPU we should resume on, based on the MPIDR of the CPU we hibernated on. This allows us to hibernate/resume on any CPU, even if the logical numbers have been shuffled by kexec.

Signed-off-by: James Morse <james.morse@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by James Morse

disable_nonboot_cpus() assumes that the lowest numbered online CPU is the boot CPU, and that this is the correct CPU to run any power management code on. On x86 this is always correct, as CPU0 cannot (easily) be taken offline. On arm64 CPU0 can be taken offline. For hibernate/resume this means we may hibernate on a CPU other than CPU0. If the system is rebooted with kexec, 'CPU0' will be assigned to a different physical CPU. This complicates hibernate/resume, as now we can't trust the CPU numbers. Arch code can find the correct physical CPU and ensure it is online before resume from hibernate begins, but it also needs to influence disable_nonboot_cpus()'s choice of CPU. Rename disable_nonboot_cpus() to freeze_secondary_cpus() and add an argument indicating which CPU should be left standing. Follow the logic in migrate_to_reboot_cpu() to use the lowest numbered online CPU if the requested CPU is not online. Add disable_nonboot_cpus() as an inline function that has the existing behaviour.

Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
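The compatibility wrapper, roughly as the commit message describes it:

  /* Keep the old behaviour for existing callers: leave the
   * lowest-numbered online CPU (CPU0 when online) standing. */
  static inline int disable_nonboot_cpus(void)
  {
          return freeze_secondary_cpus(0);
  }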
-
Committed by Mark Rutland

Follow the example set by x86 in commit 9ccaf77c ("x86/mm: Always enable CONFIG_DEBUG_RODATA and remove the Kconfig option"), and make these protections a fundamental security feature rather than an opt-in. This also results in a minor code simplification. For those rare cases when users wish to disable this protection (e.g. for debugging), this can be done by passing 'rodata=off' on the command line. As DEBUG_ALIGN_RODATA is only intended to address a performance/memory tradeoff, and does not affect correctness, it is left user-selectable. DEBUG_SET_MODULE_RONX is also left user-selectable until the core code provides a boot-time option to disable the protection for debugging use-cases.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Kees Cook <keescook@chromium.org>
Acked-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by AKASHI Takahiro

Kdump (kexec-tools) parses /proc/iomem to identify all the memory regions on the system. Since the current kernel names "nomap" regions, like UEFI runtime services code/data, as "System RAM", kexec-tools sets up the elf core header to include them in a crash dump file (/proc/vmcore). Then the crash dump kernel parses the UEFI memory map again, re-marks those regions as "nomap" and does not create a memory mapping for them, unlike the other areas of System RAM. In this case, copying /proc/vmcore through copy_oldmem_page() on the crash dump kernel will end up with a kernel abort, as reported in [1]. This patch names all the "nomap" regions explicitly as "reserved" so that we can exclude them from a crash dump file. acpi_os_ioremap() must also be modified because those regions have WB attributes [2]. Apart from kdump, this change also matches x86's use of ACPI (and /proc/iomem).

[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2016-August/448186.html
[2] http://lists.infradead.org/pipermail/linux-arm-kernel/2016-August/450089.html

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by James Morse

DEBUG_PAGEALLOC removes the valid bit of page table entries to prevent any access to unallocated memory. Hibernate uses this as a hint that those pages don't need to be saved/restored. This patch adds the kernel_page_present() function that it uses. hibernate.c copies the resume kernel's linear map for use during restore. Add _copy_pte() to fill in the holes made by DEBUG_PAGEALLOC in the resume kernel, so we can restore data the original kernel had at these addresses. Finally, DEBUG_PAGEALLOC means the linear-map alias of KERNEL_START to KERNEL_END may have holes in it, so we can't lazily clean this whole area to the PoC. Only clean the new mmuoff region, and the kernel/kvm idmaps. This reverts commit da24eb1f.

Reported-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
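A hedged sketch of kernel_page_present() as the standard page-table walk, treating section (block) mappings as present; this mirrors the shape of the arm64 implementation:

  bool kernel_page_present(struct page *page)
  {
          pgd_t *pgd;
          pud_t *pud;
          pmd_t *pmd;
          pte_t *pte;
          unsigned long addr = (unsigned long)page_address(page);

          pgd = pgd_offset_k(addr);
          if (pgd_none(*pgd))
                  return false;

          pud = pud_offset(pgd, addr);
          if (pud_none(*pud))
                  return false;
          if (pud_sect(*pud))
                  return true;    /* block mapping: always present */

          pmd = pmd_offset(pud, addr);
          if (pmd_none(*pmd))
                  return false;
          if (pmd_sect(*pmd))
                  return true;

          pte = pte_offset_kernel(pmd, addr);
          return pte_valid(*pte);
  }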
-
Committed by James Morse

Resume from hibernate needs to clean any text executed by the kernel with the MMU off to the PoC. Collect these functions together into the .idmap.text section, as all this code is tightly coupled and also needs the same cleaning after resume. Data is more complicated: secondary_holding_pen_release is written with the MMU on, cleaned and invalidated, then read with the MMU off. In contrast, __boot_cpu_mode is written with the MMU off, and the corresponding cache line is invalidated, so when we read it with the MMU on we don't get stale data. These cache maintenance operations conflict with each other if the values are within a Cache Writeback Granule (CWG) of each other. Collect the data into two sections, .mmuoff.data.read and .mmuoff.data.write; the linker script ensures the .mmuoff.data.write section is aligned to the architectural maximum CWG of 2KB.

Signed-off-by: James Morse <james.morse@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by James Morse

Each time new section markers are added, kernel/vmlinux.ld.S is updated, and new extern char __start_foo[] definitions are scattered through the tree. Create asm/sections.h to collect these definitions (and include the existing asm-generic version).

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
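A plausible sketch of the collected header, with the marker list abridged to symbols the nearby commits mention (the real file lists more):

  /* arch/arm64/include/asm/sections.h */
  #ifndef __ASM_SECTIONS_H
  #define __ASM_SECTIONS_H

  #include <asm-generic/sections.h>

  extern char __idmap_text_start[], __idmap_text_end[];
  extern char __mmuoff_data_start[], __mmuoff_data_end[];

  #endif /* __ASM_SECTIONS_H */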
-
Committed by Catalin Marinas

The ARMv8 architecture allows execute-only user permissions by clearing the PTE_UXN and PTE_USER bits. However, the kernel running on a CPU implementation without User Access Override (ARMv8.2 onwards) can still access such a page, so execute-only page permission does not protect against read(2)/write(2) etc. accesses. Systems requiring such protection must enable features like SECCOMP. This patch changes the arm64 __P100 and __S100 protection_map[] macros to the new __PAGE_EXECONLY attributes. A side effect is that pte_user() no longer triggers for __PAGE_EXECONLY since PTE_USER isn't set. To work around this, the check is done on the PTE_NG bit via the pte_ng() macro. VM_READ is also checked now for page faults.

Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
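A hedged sketch of the new attribute and check, reconstructed from the commit message (exact macro spellings may differ in the tree):

  /* Execute-only: PTE_USER and PTE_UXN are both clear, so EL0 may
   * execute but not read or write. PTE_NG still marks the mapping
   * as per-process. */
  #define PAGE_EXECONLY   __pgprot(_PAGE_DEFAULT | PTE_NG | PTE_PXN)

  /* With PTE_USER clear, use the nG bit to identify user mappings. */
  #define pte_ng(pte)     (!!(pte_val(pte) & PTE_NG))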
-
Committed by Pratyush Anand

Whenever we hit a kprobe from a non-kprobe debug exception handler, we hit endless occurrences of "Unexpected kernel single-step exception at EL1".

PSTATE.D is the debug exception mask bit. It is set whenever we enter an exception mode. While it is set, Watchpoint, Breakpoint and Software Step exceptions are masked. However, Software Breakpoint Instruction exceptions can never be masked. Therefore, if we ever execute a BRK instruction, irrespective of the D-bit setting, we will receive a corresponding breakpoint exception. For example:

- We are executing a kprobe pre/post handler, and the kprobe has been inserted in one of the instructions of a function called by the handler. So, it executes a BRK instruction and we land in the KPROBE_REENTER case. (This case is already handled by the current code.)
- We are executing a uprobe handler or any other BRK handler, such as in WARN_ON (BRK BUG_BRK_IMM), and we trace that path using a kprobe. So, we enter the kprobe breakpoint handler from another BRK handler. (This case is not currently handled.)

In all such cases the kprobe breakpoint exception is raised while we are already in debug exception mode. SPSR's D bit (bit 9) shows the value of PSTATE.D immediately before the exception was taken, so in the above example cases we would find it set in the kprobe breakpoint handler. A single-step exception will always follow the kprobe breakpoint exception; however, it will only be raised gracefully if we clear the D bit while returning from the breakpoint exception. If the D bit is left set, it results in an undefined exception, and when its handler enables debug a single-step exception is generated, which will never be handled (the address does not match, so it is treated as unexpected).

This patch clears the D-flag unconditionally in setup_singlestep, so that we can always take the single-step exception correctly after returning from the breakpoint exception. Additionally, it removes the D-flag set statement for the KPROBE_REENTER return path, because the debug exception for KPROBE_REENTER will always take place in a debug exception state, so the D-flag will already be set in this case.

Acked-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
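The core of the fix, sketched (the real change clears the flag inside setup_singlestep(); PSR_D_BIT is the debug mask bit in the saved PSTATE, and the wrapper function here is illustrative):

  static void sketch_unmask_debug(struct pt_regs *regs)
  {
          /* Clear the saved PSTATE.D so the single-step exception can
           * fire after we return to step the replaced instruction. */
          regs->pstate &= ~PSR_D_BIT;
  }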
-
- 22 Aug 2016, 2 commits
-
-
Committed by Ard Biesheuvel

Currently, x25 and x26 hold the physical addresses of idmap_pg_dir and swapper_pg_dir, respectively, when running early boot code. But having registers with 'global' scope in files that contain different sections with different lifetimes, and that are called by different CPUs at different times, is a bit messy, especially since stashing the values does not buy us anything in terms of code size or clarity. So simply replace each reference to x25 or x26 with an adrp instruction referring to idmap_pg_dir or swapper_pg_dir directly.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Jisheng Zhang

These objects are set during initialization and are read-only thereafter. Initially I only wanted to mark vdso_pages, vdso_spec, vectors_page and cpu_ops as __read_mostly from a performance point of view. Then, inspired by Kees's patch [1] applying more __ro_after_init for arm, I decided it is better to mark them as __ro_after_init. What's more, I found some more objects that are also read-only after init, so apply __ro_after_init to all of them. This patch also removes the global vdso_pagelist and tries to clean up the vdso_spec[] assignment code.

[1] http://www.spinics.net/lists/arm-kernel/msg523188.html

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
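An illustrative use of the attribute, reflecting the vdso_spec cleanup the message mentions: the object is written once during boot, then the section it lives in is made read-only.

  /* Initialised during boot, never written again afterwards. */
  static struct vm_special_mapping vdso_spec[2] __ro_after_init = {
          {
                  .name = "[vvar]",
          },
          {
                  .name = "[vdso]",
          },
  };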
-