- 29 May 2014, 3 commits
-
Committed by AKASHI Takahiro

This patch allows "dynamic ftrace" if CONFIG_DYNAMIC_FTRACE is enabled. With it, tracing can be turned on and off dynamically on a per-function basis. On arm64, this is done by patching the single branch instruction to _mcount() that gcc's -pg option inserts. The branch is replaced with a NOP at kernel start-up; later, the NOP is patched to a branch to ftrace_caller() when tracing is enabled, and back to a NOP when it is disabled. Note that ftrace_caller() is the counterpart of _mcount() in the 'static' ftrace case. More details on the architecture-specific requirements are described in Documentation/trace/ftrace-design.txt.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
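For illustration, the enable path can be sketched as below; the helper names follow the arm64 instruction-patching API this series builds on, but treat the details as a sketch rather than the exact committed code:

    /* Turn the NOP at a call site back into "bl ftrace_caller". */
    static int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
    {
        unsigned long pc = rec->ip;    /* the patched instruction */
        u32 old = aarch64_insn_gen_nop();
        u32 new = aarch64_insn_gen_branch_imm(pc, addr,
                                              AARCH64_INSN_BRANCH_LINK);

        /* verify the old instruction, then swap in the new one */
        return ftrace_modify_code(pc, old, new, true);
    }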
-
Committed by AKASHI Takahiro

This patch implements the arm64-specific part of supporting function tracers, such as function (CONFIG_FUNCTION_TRACER), function_graph (CONFIG_FUNCTION_GRAPH_TRACER) and function profiler (CONFIG_FUNCTION_PROFILER). With the 'function' tracer, all the functions in the kernel are traced with timestamps in ${sysfs}/tracing/trace. If the function_graph tracer is specified, a call graph is generated. The kernel must be compiled with the -pg option so that _mcount() is inserted at the beginning of functions. This function is called on every function entry as long as tracing is enabled. In addition, the function_graph tracer also needs to be able to probe a function's exit. ftrace_graph_caller() and return_to_handler do this by faking the link register's value to intercept the function's return path. More details on the architecture-specific requirements are described in Documentation/trace/ftrace-design.txt.

Reviewed-by: Ganapatrao Kulkarni <ganapatrao.kulkarni@cavium.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
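The return-path interception works roughly as follows; push_return_trace() is an illustrative stand-in for the generic ftrace bookkeeping, and the real entry glue is in assembly:

    /* Called on function entry; 'parent' points at the saved LR. */
    void prepare_ftrace_return(unsigned long *parent,
                               unsigned long self_addr,
                               unsigned long frame_pointer)
    {
        unsigned long old = *parent;

        /* Fake the saved LR: the function now "returns" into
         * return_to_handler, which records the exit time and then
         * jumps to the real return address saved below. */
        *parent = (unsigned long)&return_to_handler;

        if (push_return_trace(old, self_addr, frame_pointer) < 0)
            *parent = old;    /* bookkeeping failed: undo */
    }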
-
Committed by AKASHI Takahiro

walk_stackframe() calls unwind_frame(), and if walk_stackframe() is "notrace", unwind_frame() should also be "notrace".

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
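For reference, notrace is a function attribute that suppresses the -pg instrumentation, so functions used while taking a trace do not recurse into the tracer (definition as in include/linux/compiler.h):

    #define notrace __attribute__((no_instrument_function))

    static int notrace unwind_frame(struct stackframe *frame)
    {
        /* ... walk one frame; must not itself be traced ... */
        return 0;
    }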
-
- 23 May 2014, 4 commits
-
Committed by Geoff Levand

Change the arm64 linker script ENTRY() command to define _text as the kernel entry point. The arm64 boot protocol specifies that the kernel must be entered at the beginning of the kernel image. The existing ENTRY() command defined the symbol stext as the entry point, which emitted an incorrect entry point, but did not cause a runtime error because the code at the beginning of the image immediately jumps to stext anyway.

Signed-off-by: Geoff Levand <geoff@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by zhichang.yuan

This patch, based on Linaro's Cortex Strings library, adds assembly-optimized strlen() and strnlen() functions.

Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>
Signed-off-by: Deepak Saxena <dsaxena@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by zhichang.yuan

This patch, based on Linaro's Cortex Strings library, adds assembly-optimized strcmp() and strncmp() functions.

Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>
Signed-off-by: Deepak Saxena <dsaxena@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by zhichang.yuan

This patch, based on Linaro's Cortex Strings library, adds an assembly-optimized memcmp() function.

Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>
Signed-off-by: Deepak Saxena <dsaxena@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 17 May 2014, 5 commits
-
Committed by Arun KS

If one process calls sys_reboot and that process then stops other CPUs while those CPUs are within a spin_lock() region, we can potentially encounter a deadlock scenario like below:

              CPU 0                   CPU 1
              -----                   -----
           spin_lock(my_lock)
           smp_send_stop()
            <send IPI>             handle_IPI()
                                    disable_preemption/irqs
                                     while(1);
           <PREEMPT>
           spin_lock(my_lock) <--- Waits forever

We shouldn't attempt to run any other tasks after we send a stop IPI to a CPU, so disable preemption so that this task runs to completion. We use local_irq_disable() here for cross-arch consistency with x86.

Based-on-work-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Arun KS <getarunks@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
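A minimal sketch of the resulting halt path (the actual patch updates halt, power-off and restart alike in arch/arm64/kernel/process.c):

    void machine_halt(void)
    {
        /* No interrupts, hence no preemption, past this point:
         * after the stop IPI this task must run to completion. */
        local_irq_disable();
        smp_send_stop();
        while (1);
    }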
-
Committed by Arun KS

This patch ports most of commit 19ab428f ("ARM: 7759/1: decouple CPU offlining from reboot/shutdown") by Stephen Warren from arch/arm to arch/arm64.

machine_shutdown() is a hook for kexec. Add a comment saying so, since it isn't obvious from the function name.

Halt, power-off and restart have different requirements regarding stopping secondary CPUs than kexec has. The former simply require the secondary CPUs to be quiesced somehow, whereas kexec requires them to be completely non-operational, so that no matter where the kexec target images are written in RAM, they won't influence operation of the secondary CPUs, which could happen if the CPUs were still executing some kind of pin loop.

To this end, modify machine_halt, power_off and restart to call smp_send_stop() directly, rather than calling machine_shutdown(). In machine_shutdown(), replace the call to smp_send_stop() with a call to disable_nonboot_cpus(). This completely disables all but one CPU, thus satisfying the kexec requirements a couple of paragraphs above.

Signed-off-by: Arun KS <getarunks@gmail.com>
Acked-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
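Sketched, the split looks roughly like this:

    /* kexec hook: secondaries must be fully offline, not just spinning */
    void machine_shutdown(void)
    {
        disable_nonboot_cpus();
    }

    void machine_power_off(void)
    {
        local_irq_disable();
        smp_send_stop();    /* quiescing the secondaries is enough here */
        if (pm_power_off)
            pm_power_off();
    }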
-
Committed by Larry Bassel
Support for arch_irq_work_raise() was missing from arm64 (a prerequisite for FULL_NOHZ). This patch is based on the arm32 patch ARM 7872/1:

    commit bf18525f
    Author: Stephen Boyd <sboyd@codeaurora.org>
    Date:   Tue Oct 29 20:32:56 2013 +0100

        ARM: 7872/1: Support arch_irq_work_raise() via self IPIs

        By default, IRQ work is run from the tick interrupt (see irq_work_run() in update_process_times()). When we're in full NOHZ mode, restarting the tick requires the use of IRQ work and if the only place we run IRQ work is in the tick interrupt we have an unbreakable cycle. Implement arch_irq_work_raise() via self IPIs to break this cycle and get the tick started again. Note that we implement this via IPIs which are only available on SMP builds. This shouldn't be a problem because full NOHZ is only supported on SMP builds anyway.

        Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
        Reviewed-by: Kevin Hilman <khilman@linaro.org>
        Cc: Frederic Weisbecker <fweisbec@gmail.com>
        Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Signed-off-by: Larry Bassel <larry.bassel@linaro.org>
Reviewed-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
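The arm64 version ends up close to its arm32 counterpart; roughly (a sketch using the IPI plumbing of the era):

    #ifdef CONFIG_IRQ_WORK
    void arch_irq_work_raise(void)
    {
        /* Self-IPI so irq_work_run() executes from the IPI handler
         * even while the tick is stopped. */
        if (__smp_cross_call)
            smp_cross_call(cpumask_of(smp_processor_id()),
                           IPI_IRQ_WORK);
    }
    #endif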
-
Committed by Mark Brown

Add support for parsing the explicit topology bindings to discover the topology of the system. Since it is not currently clear how to map multi-level clusters for the scheduler, all leaf clusters are presented to the scheduler at the same level. This should be enough to provide good support for current systems.

Signed-off-by: Mark Brown <broonie@linaro.org>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Mark Brown

As a legacy of the way 32-bit ARM did things, the topology code uses a null topology map by default and later overwrites it, mapping cores with no information to a cluster by themselves. In order to make it simpler to reset things as part of recovering from parse failures in firmware information, directly set this configuration on init. A core will always be its own sibling, so there should be no risk of confusion with firmware-provided information.

Signed-off-by: Mark Brown <broonie@linaro.org>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
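What "directly set this configuration on init" amounts to, sketched with the field names used by arch/arm64/kernel/topology.c (treat the details as illustrative):

    static void __init reset_cpu_topology(void)
    {
        unsigned int cpu;

        for_each_possible_cpu(cpu) {
            struct cpu_topology *topo = &cpu_topology[cpu];

            /* "no information" markers... */
            topo->thread_id  = -1;
            topo->core_id    = 0;
            topo->cluster_id = -1;

            /* ...but a core is always its own sibling */
            cpumask_clear(&topo->core_sibling);
            cpumask_set_cpu(cpu, &topo->core_sibling);
            cpumask_clear(&topo->thread_sibling);
            cpumask_set_cpu(cpu, &topo->thread_sibling);
        }
    }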
-
- 12 May 2014, 4 commits
-
Committed by AKASHI Takahiro

Some kernel files may include both linux/compat.h and asm/compat.h, directly or indirectly. Since both header files contain is_compat_task() under !CONFIG_COMPAT, compiling them with !CONFIG_COMPAT will eventually fail. Such files include kernel/auditsc.c, kernel/seccomp.c and init/do_mounts.c (do_mounts.c may pull in asm/compat.h via asm/ftrace.h once ftrace is implemented). So this patch proactively:

1) removes is_compat_task() under !CONFIG_COMPAT from asm/compat.h;

2) replaces asm/compat.h with linux/compat.h in kernel/*.c, although asm/compat.h is still necessary in ptrace.c and process.c because they use is_compat_thread().

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
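The failure mode is a plain C redefinition: both headers carried a stub of this shape, so a translation unit that pulls in both sees the same function defined twice:

    #ifndef CONFIG_COMPAT
    static inline int is_compat_task(void)
    {
        return 0;
    }
    #endif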
-
Committed by AKASHI Takahiro

As done on arm, this change makes it easy to confirm that we invoke syscall-related hooks, including the syscall tracepoint, audit and seccomp (which will be implemented later), in the correct order: that is, operations done on entry are undone on exit in the opposite order.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
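The mirrored ordering can be sketched like this; the seccomp/tracepoint/audit calls land in the commented slots as later patches add them (a simplified outline, not the exact committed code):

    asmlinkage int syscall_trace_enter(struct pt_regs *regs)
    {
        if (test_thread_flag(TIF_SYSCALL_TRACE))
            tracehook_report_syscall(regs, PTRACE_SYSCALL_ENTER);

        /* seccomp, tracepoint and audit entry hooks go here */

        return regs->syscallno;
    }

    asmlinkage void syscall_trace_exit(struct pt_regs *regs)
    {
        /* ...and are unwound here in the opposite order */

        if (test_thread_flag(TIF_SYSCALL_TRACE))
            tracehook_report_syscall(regs, PTRACE_SYSCALL_EXIT);
    }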
-
Committed by AKASHI Takahiro
Currently syscall_trace() is called only for ptrace. With additional TIF_xx flags defined, it is now called in all the cases of audit, ftrace and seccomp in addition to ptrace.

Acked-by: Richard Guy Briggs <rgb@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Will Deacon
Since mdscr_el1 is part of the debug register group, it is highly likely to be trapped by a hypervisor to prevent virtual machines from debugging (buggering?) each other. Unfortunately, this absolutely destroys our performance, since we access the register on many of our low-level fault handling paths to keep track of the various debug state machines.

This patch removes our dependency on mdscr_el1 in the case that debugging is not being used. More specifically we:

- Use TIF_SINGLESTEP to indicate that a task is stepping at EL0 and avoid disabling step in the MDSCR when we don't need to. MDSCR_EL1.SS handling is moved to kernel_entry, when trapping from userspace.

- Ensure debug exceptions are re-enabled on *all* exception entry paths, even the debug exception handling path (where we re-enable exceptions after invoking the handler). Since we can now rely on MDSCR_EL1.SS being cleared by the entry code, exception handlers can usually enable debug immediately before enabling interrupts.

- Remove all debug exception unmasking from ret_to_user and el1_preempt, since we will never get here with debug exceptions masked.

This results in a slight change to kernel debug behaviour, where we now step into interrupt handlers and data aborts from EL1 when debugging the kernel, which is actually a useful thing to do. A side-effect of this is that it *does* potentially prevent stepping off {break,watch}points when there is a high-frequency interrupt source (e.g. a timer), so a debugger would need to use either breakpoints or manually disable interrupts to get around this issue.

With this patch applied, guest performance is restored under KVM when debug register accesses are trapped (and we get a measurable performance increase on the host on Cortex-A57 too).

Cc: Ian Campbell <ian.campbell@citrix.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 10 May 2014, 2 commits
-
Committed by Will Deacon
set_cpu_boot_mode_flag is used to identify which exception levels are encountered across the system by CPUs trying to enter the kernel. The basic algorithm is: if a CPU is booting at EL2, it will set a flag at an offset of #4 from __boot_cpu_mode, a cacheline-aligned variable. Otherwise, a flag is set at an offset of zero into the same cacheline. This enables us to check that all CPUs booted at the same exception level.

This cacheline is written with the stage-1 MMU off (that is, via a strongly-ordered mapping) and will bypass any clean lines in the cache, leading to potential coherence problems when the variable is later checked via the normal, cacheable mapping of the kernel image.

This patch reworks the broken flushing code so that we:

(1) Use a DMB to order the strongly-ordered write of the cacheline against the subsequent cache-maintenance operation (by-VA operations only hazard against normal, cacheable accesses).

(2) Use a single dc ivac instruction to invalidate any clean lines containing a stale copy of the line after it has been updated.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Will Deacon

When calling our low-level barrier macros directly, we can often get away with more relaxed behaviour than the default "all accesses, full system" option. This patch updates the users of dsb() to specify the option which they actually require.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
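Concretely, the macro takes the option as an argument and pastes it straight into the instruction, so callers name exactly the scope and access type they need (macro form as in the arm64 barrier header of this series; call sites illustrative):

    #define dsb(opt)    asm volatile("dsb " #opt : : : "memory")

    dsb(ishst);    /* prior stores, inner-shareable domain only */
    dsb(sy);       /* the old default: all accesses, full system */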
-
- 09 May 2014, 6 commits
-
Committed by Catalin Marinas

The primary aim of this patchset is to remove the pgprot_default and prot_sect_default global variables and rely strictly on predefined values. The original goal was to be able to run SMP kernels on UP hardware by not setting the Shareability bit. However, we are unlikely to see UP ARMv8 hardware, and even if we do, the Shareability bit is no longer assumed to disable cacheable accesses.

A side effect is that the device mappings now have the Shareability attribute set. The hardware, however, should ignore it since Device accesses are always Outer Shareable.

Following the removal of the two global variables, there is some PROT_* macro reshuffling and cleanup, including the __PAGE_* macros (replaced by PAGE_*).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
-
Committed by Catalin Marinas

This information is useful for instruction emulators to detect read/write and access size without having to decode the faulting instruction. The current patch exports it via sigcontext (struct esr_context); it is only valid for SIGSEGV and SIGBUS.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
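The record pushed onto the signal frame is small (layout as in the arm64 uapi sigcontext header introduced by this series):

    struct esr_context {
        struct _aarch64_ctx head;    /* head.magic = ESR_MAGIC */
        __u64 esr;                   /* raw ESR_EL1 value */
    };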
-
Committed by Catalin Marinas
This patch removes the aux_context structure (and the containing file) to allow the placement of the _aarch64_ctx end magic based on the context stored on the signal stack.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Catalin Marinas

For AArch32, bit 11 (WnR) of the FSR/ESR register is set when the fault was caused by a write access, and applications like Qemu rely on such information being provided in sigcontext. This patch introduces ESR_EL1 tracking for arm64 kernel faults and sets bit 11 accordingly in the compat sigcontext.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
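Schematically (WnR is ISS bit 6 of ESR_ELx, reported to AArch32 in FSR bit 11; the helper below is illustrative, not the exact committed code):

    #define ESR_WNR      (1U << 6)     /* ESR: Write-not-Read */
    #define FSR_WRITE    (1U << 11)    /* AArch32 FSR: WnR */

    /* Build the FSR value exposed to a compat task's signal handler. */
    static u32 compat_fault_fsr(u32 fault_code, unsigned long esr_el1)
    {
        u32 fsr = fault_code;

        if (esr_el1 & ESR_WNR)
            fsr |= FSR_WRITE;
        return fsr;
    }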
-
Committed by Catalin Marinas
The synchronisation with the boot thread already happens in __cpu_up() via wait_for_completion_timeout(). In addition, __cpu_up() calls are protected by the cpu_add_remove_lock mutex and already serialised.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Catalin Marinas

The hardware provides the maximum cache line size in the system via the CTR_EL0.CWG bits. This patch implements the cache_line_size() function to read this information, together with a sanity check in case the statically defined L1_CACHE_BYTES is smaller than the hardware value.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
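The helper is small. CWG encodes log2 of the maximum line size in 4-byte words, so the byte size is 4 << CWG, with 0 meaning "not provided" (a sketch):

    int cache_line_size(void)
    {
        u32 cwg = (read_cpuid_cachetype() >> 24) & 0xf;    /* CTR_EL0.CWG */

        return cwg ? 4 << cwg : L1_CACHE_BYTES;    /* fall back if unset */
    }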
-
- 08 May 2014, 3 commits
-
Committed by Ard Biesheuvel

This patch modifies kernel_neon_begin() and kernel_neon_end() so they may be called from any context. To address the case where only a couple of registers are needed, kernel_neon_begin_partial(u32) is introduced, which takes as a parameter the number of bottom 'n' NEON q-registers required. To mark the end of such a partial section, the regular kernel_neon_end() should be used.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
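Usage from a caller then looks like this (illustrative; the register count is whatever the NEON routine actually clobbers):

    kernel_neon_begin_partial(4);    /* only q0-q3 will be touched */
    /* ... NEON-accelerated code ... */
    kernel_neon_end();               /* same terminator for both variants */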
-
Committed by Ard Biesheuvel

If a task gets scheduled out and back in again and nothing has touched its FPSIMD state in the meantime, there is really no reason to reload it from memory. Similarly, repeated calls to kernel_neon_begin() and kernel_neon_end() will preserve and restore the FPSIMD state every time.

This patch defers the FPSIMD state restore to the last possible moment, i.e., right before the task returns to userland. If a task does not return to userland at all (for any reason), the existing FPSIMD state is preserved and may be reused by the owning task if it gets scheduled in again on the same CPU.

This patch adds two more functions to abstract away from straight FPSIMD register file saves and restores:
- fpsimd_restore_current_state -> ensure current's FPSIMD state is loaded
- fpsimd_flush_task_state -> invalidate live copies of a task's FPSIMD state

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
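The deferral hinges on a per-CPU owner pointer plus a thread flag; below is a sketch of the mechanism, not the exact committed code:

    /* whose FPSIMD state is live in this CPU's registers? */
    static DEFINE_PER_CPU(struct fpsimd_state *, fpsimd_last_state);

    /* called on the return-to-userland path */
    void fpsimd_restore_current_state(void)
    {
        preempt_disable();
        if (test_and_clear_thread_flag(TIF_FOREIGN_FPSTATE)) {
            struct fpsimd_state *st = &current->thread.fpsimd_state;

            /* registers hold someone else's state: reload ours
             * and take ownership of this CPU's register file */
            fpsimd_load_state(st);
            __this_cpu_write(fpsimd_last_state, st);
        }
        preempt_enable();
    }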
-
Committed by Ard Biesheuvel
There are two tacit assumptions in the FPSIMD handling code that will no longer hold after the next patch, which optimizes away some FPSIMD state restores:
- the FPSIMD registers of this CPU contain the userland FPSIMD state of task 'current';
- when switching to a task, its FPSIMD state will always be restored from memory.

This patch adds the following functions to abstract away from straight FPSIMD register file saves and restores:
- fpsimd_preserve_current_state -> ensure current's FPSIMD state is saved
- fpsimd_update_current_state -> replace current's FPSIMD state

Where necessary, the signal handling and fork code are updated to use the above wrappers instead of poking into the FPSIMD registers directly.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
-
- 04 May 2014, 2 commits
-
Committed by Catalin Marinas
Recently, the default DMA ops have been changed to non-coherent for alignment with 32-bit ARM platforms (and DT files). This patch adds bus notifiers to be able to set the coherent DMA ops (with no cache maintenance) for devices explicitly marked as coherent via the "dma-coherent" DT property.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
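A notifier of that shape could look as follows; the ops name matches the swiotlb-based DMA ops of this era's arm64 code, but treat the details as a sketch rather than the committed implementation:

    static int dma_bus_notifier(struct notifier_block *nb,
                                unsigned long event, void *_dev)
    {
        struct device *dev = _dev;

        if (event != BUS_NOTIFY_ADD_DEVICE)
            return NOTIFY_DONE;

        /* opt in to cache-maintenance-free ops for coherent devices */
        if (of_property_read_bool(dev->of_node, "dma-coherent"))
            dev->archdata.dma_ops = &coherent_swiotlb_dma_ops;

        return NOTIFY_OK;
    }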
-
Committed by Marc Zyngier

Commit d57c33c5 ("add generic fixmap.h") added (among other similar things) set_fixmap_io to deal with early ioremap of devices. More recently, commit bf4b558e ("arm64: add early_ioremap support") converted the arm64 earlyprintk to use set_fixmap_io. A side effect of this conversion is that my virtual machines have stopped booting when I pass "earlyprintk=uart8250-8bit,0x3f8" to the guest kernel.

It turns out that the new earlyprintk code doesn't care at all about sub-page offsets and just assumes that the earlyprintk device will be page-aligned. Obviously, that doesn't play well with the above example.

Further investigation shows that set_fixmap_io uses __set_fixmap instead of __set_fixmap_offset. A fix is to introduce a set_fixmap_offset_io that uses the latter, and to remove the superfluous call to fix_to_virt (which only returns the value that set_fixmap_io has already given us). With this applied, my VMs are back in business.

Tested on a Cortex-A57 platform with kvmtool as platform emulation.

Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Salter <msalter@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
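In the generic header, the offset-preserving variant is a one-liner on top of __set_fixmap_offset, which maps the page and then adds back the sub-page offset of the physical address (as in include/asm-generic/fixmap.h):

    #define set_fixmap_offset_io(idx, phys) \
        __set_fixmap_offset(idx, phys, FIXMAP_PAGE_IO)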
-
- 26 April 2014, 1 commit
-
Committed by Chanho Min

Clock providers should be initialized before clocksource_of_init; if not, clocksource initialization can fail to get its clock.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Chanho Min <chanho.min@lge.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
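In arch/arm64/kernel/time.c this is just an ordering constraint (sketch):

    void __init time_init(void)
    {
        of_clk_init(NULL);       /* register DT clock providers first... */
        clocksource_of_init();   /* ...so clocksource drivers can clk_get() */

        /* ... */
    }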
-
- 25 April 2014, 1 commit
-
Committed by Will Deacon
Sending a SIGTRAP to a user task after execution of a BRK instruction at EL0 is fundamental to the way in which software breakpoints work and doesn't deserve a warning to be logged in dmesg. Whilst the warning can be justified from EL1, do_debug_exception will already do the right thing, so simply remove the code altogether.

Cc: Sandeepa Prabhu <sandeepa.prabhu@linaro.org>
Reported-by: Kyrylo Tkachov <kyrylo.tkachov@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 08 April 2014, 2 commits
-
Committed by Mark Salter
Add support for early IO or memory mappings which are needed before the normal ioremap() is usable. This also adds fixmap support for permanent fixed mappings such as that used by the earlyprintk device register region.

Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Borislav Petkov <borislav.petkov@amd.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
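Typical use from early setup code looks like this (UART_PHYS and UART_SIZE are hypothetical placeholders for a real device's register region):

    void __iomem *base = early_ioremap(UART_PHYS, UART_SIZE);

    /* ... access device registers before ioremap() is usable ... */

    early_iounmap(base, UART_SIZE);    /* release the temporary slot */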
-
Committed by Mark Salter
Presently, paging_init() calls init_mem_pgprot() to initialize pgprot values used by macros such as PAGE_KERNEL, PAGE_KERNEL_EXEC, etc. The new fixmap and early_ioremap support also needs to use these macros before paging_init() is called. This patch moves the init_mem_pgprot() call out of paging_init() and into setup_arch() so that pgprot_default gets initialized in time for fixmap and early_ioremap.

Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Borislav Petkov <borislav.petkov@amd.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
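The resulting ordering in setup_arch() is roughly (a sketch):

    void __init setup_arch(char **cmdline_p)
    {
        /* ... */
        init_mem_pgprot();       /* PAGE_* protections valid from here on */
        early_ioremap_init();    /* fixmap/early_ioremap now usable */
        /* ... */
        paging_init();           /* no longer calls init_mem_pgprot() */
        /* ... */
    }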
-
- 07 April 2014, 1 commit
-
Committed by Mark Salter
Recent arm64 builds using CONFIG_ARM64_64K_PAGES are failing with:

    arch/arm64/kernel/perf_regs.c: In function 'perf_reg_abi':
    arch/arm64/kernel/perf_regs.c:41:2: error: implicit declaration of function 'is_compat_thread'

    arch/arm64/kernel/perf_event.c:1398:2: error: unknown type name 'compat_uptr_t'

This is due to some recent arm64 perf commits with compat support:

    commit 23c7d70d: ARM64: perf: add support for frame pointer unwinding in compat mode
    commit 2ee0d7fd: ARM64: perf: add support for perf registers API

Those patches make the arm64 kernel unbuildable if CONFIG_COMPAT is not defined, and CONFIG_ARM64_64K_PAGES depends on !CONFIG_COMPAT. This patch allows the arm64 kernel to build with and without CONFIG_COMPAT.

Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
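The usual shape of such a fix is to guard the compat-only paths; this is illustrative only, and the actual patch may differ in detail:

    u64 perf_reg_abi(struct task_struct *task)
    {
    #ifdef CONFIG_COMPAT
        if (is_compat_thread(task_thread_info(task)))
            return PERF_SAMPLE_REGS_ABI_32;
    #endif
        return PERF_SAMPLE_REGS_ABI_64;
    }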
-
- 05 April 2014, 1 commit
-
Committed by Catalin Marinas

With system caches for the host OS or architected caches for the guest OS, we cannot easily guarantee that there are no dirty or stale cache lines for the areas of memory written by the kernel during boot with the MMU off (and therefore via non-cacheable accesses). This patch adds the necessary cache maintenance during boot and relaxes the booting requirements.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 20 March 2014, 2 commits
-
Committed by Srivatsa S. Bhat
Subsystems that want to register CPU hotplug callbacks, as well as perform initialization for the CPUs that are already online, often do it as shown below:

    get_online_cpus();

    for_each_online_cpu(cpu)
        init_cpu(cpu);

    register_cpu_notifier(&foobar_cpu_notifier);

    put_online_cpus();

This is wrong, since it is prone to ABBA deadlocks involving the cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently with CPU hotplug operations).

Instead, the correct and race-free way of performing the callback registration is:

    cpu_notifier_register_begin();

    for_each_online_cpu(cpu)
        init_cpu(cpu);

    /* Note the use of the double underscored version of the API */
    __register_cpu_notifier(&foobar_cpu_notifier);

    cpu_notifier_register_done();

Fix the debug-monitors code in arm64 by using this latter form of callback registration.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Ingo Molnar <mingo@kernel.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Srivatsa S. Bhat
Subsystems that want to register CPU hotplug callbacks, as well as perform initialization for the CPUs that are already online, often do it as shown below:

    get_online_cpus();

    for_each_online_cpu(cpu)
        init_cpu(cpu);

    register_cpu_notifier(&foobar_cpu_notifier);

    put_online_cpus();

This is wrong, since it is prone to ABBA deadlocks involving the cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently with CPU hotplug operations).

Instead, the correct and race-free way of performing the callback registration is:

    cpu_notifier_register_begin();

    for_each_online_cpu(cpu)
        init_cpu(cpu);

    /* Note the use of the double underscored version of the API */
    __register_cpu_notifier(&foobar_cpu_notifier);

    cpu_notifier_register_done();

Fix the hw-breakpoint code in arm64 by using this latter form of callback registration.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 13 March 2014, 3 commits
-
Committed by Jingoo Han
Make smp_spin_table_cpu_postboot() static, because this function is used only in this file.

Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Jingoo Han
Make local symbols static, because these are used only in this file.

Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Jean Pihet

When profiling a 32-bit application, user space callchain unwinding using the frame pointer is performed in compat mode. The code is taken over from the AArch32 implementation and adapted to work on AArch64.

Signed-off-by: Jean Pihet <jean.pihet@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
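The compat unwinder walks 32-bit frame records of this shape (a sketch mirroring the APCS frame layout used by the AArch32 code; treat the field names as illustrative):

    struct compat_frame_tail {
        compat_uptr_t    fp;    /* 32-bit pointer to the next record */
        u32              sp;
        u32              lr;    /* return address for the sample */
    } __attribute__((packed));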
-