- 24 Feb 2013, 1 commit

By Russell King

There's no need to have code initializing this by hand; it's more efficient to initialize the constant structure members directly.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 29 Nov 2012, 2 commits

By Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
By Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 13 Nov 2012, 1 commit

By Nicolas Pitre

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 18 Oct 2012, 1 commit

By fwu

1. On ARM, "nohlt" can be used to keep a core out of the idle process, making it return immediately.
2. Two interfaces exported for other modules, "disable_hlt" and "enable_hlt", enable/disable the cpuidle mechanism by incrementing/decrementing "hlt_counter". They are paired operations: calling disable_hlt first and enable_hlt afterwards gives the correct semantics.
3. There is no constraint preventing user (driver/module) code from calling enable_hlt ahead of disable_hlt, which is a fatal kernel state change driven from outside, and the current kernel code gives no WARNING or other notification when this happens.

This patch reports a BUG when that case happens, just as the kernel does when enable_irq runs ahead of disable_irq. A sketch follows below.

Link: https://patchwork.kernel.org/patch/1527881/
Signed-off-by: fwu <fwu@marvell.com>
Signed-off-by: YiLu Mao <ylmao@marvell.com>
Signed-off-by: Ning Jiang <ning.jiang@marvell.com>
Acked-by: Nicolas Pitre
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
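A minimal sketch of the check this describes, assuming the hlt_counter layout used by the ARM idle code of that era (illustrative, not the exact diff):

```c
#include <linux/bug.h>
#include <linux/export.h>

static volatile int hlt_counter;

void disable_hlt(void)
{
	hlt_counter++;
}
EXPORT_SYMBOL(disable_hlt);

void enable_hlt(void)
{
	hlt_counter--;
	/* Report a BUG if enable_hlt() ran ahead of disable_hlt(),
	 * mirroring what enable_irq()/disable_irq() imbalances do. */
	BUG_ON(hlt_counter < 0);
}
EXPORT_SYMBOL(enable_hlt);
```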
-
- 13 Oct 2012, 1 commit

By Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 01 Oct 2012, 1 commit

By Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 01 Aug 2012, 1 commit

By Bryan Wu

Cc: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Bryan Wu <bryan.wu@canonical.com>
-
- 31 Jul 2012, 1 commit

By Shawn Guo

The CPU will spin endlessly at the end of the machine_halt and machine_restart calls. If CONFIG_LOCKUP_DETECTOR is enabled, this leads to a soft-lockup warning after about 20 seconds, since the system timer is still alive. Disable interrupts before entering the endless spin, so that the lockup warning is never seen.

Cc: <stable@vger.kernel.org>
Reported-by: Marek Vasut <marex@denx.de>
Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
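The shape of the fix, sketched for machine_halt only (the same pattern applies to the restart path; illustrative):

```c
#include <linux/irqflags.h>

void machine_halt(void)
{
	/* Mask interrupts first: with the system timer silenced, the
	 * soft-lockup detector can never fire during the final spin. */
	local_irq_disable();
	while (1)
		;
}
```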
-
- 08 May 2012, 1 commit

By Thomas Gleixner

cpuidle uses a generic function now. Remove the unused code.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@arm.linux.org.uk>
Link: http://lkml.kernel.org/r/20120507175652.260797846@linutronix.de
-
- 29 Mar 2012, 1 commit

By David Howells

Disintegrate asm/system.h for ARM.

Signed-off-by: David Howells <dhowells@redhat.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: linux-arm-kernel@lists.infradead.org
-
- 24 Mar 2012, 2 commits

By Will Deacon

The current user mapping for the vectors page is inserted as a "horrible hack vma" into each task via arch_setup_additional_pages. This causes problems with the MM subsystem and vm_normal_page, as described here:

https://lkml.org/lkml/2012/1/14/55

Following the suggestion from Hugh in the above thread, this patch uses the gate_vma for the vectors user mapping, consolidating the horrible hack VMAs into one.

Acked-and-Tested-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
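Roughly what the consolidation looks like: one static gate_vma covering the vectors page, returned from the gate-VMA hooks. A sketch; the field values reflect the ARM high-vectors mapping but should be treated as illustrative:

```c
static struct vm_area_struct gate_vma;

static int __init gate_vma_init(void)
{
	gate_vma.vm_start     = 0xffff0000;		/* high vectors base */
	gate_vma.vm_end       = 0xffff0000 + PAGE_SIZE;
	gate_vma.vm_page_prot = PAGE_READONLY_EXEC;
	gate_vma.vm_flags     = VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYEXEC;
	return 0;
}
arch_initcall(gate_vma_init);

struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
{
	return &gate_vma;	/* one shared vma instead of a per-task hack */
}
```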
-
By Jason Baron

The motivation for this patchset was that I was looking for a way for a qemu-kvm process to exclude the guest memory from its core dump, which can be quite large. There are already a number of filter flags in /proc/<pid>/coredump_filter; however, these allow one to specify 'types' of kernel memory, not specific address ranges (which is what is needed here).

Since there are no more vma flags available, the first patch eliminates the need for the 'VM_ALWAYSDUMP' flag. The flag is used internally by the kernel to mark vdso and vsyscall pages. However, it is simple enough to check whether a vma covers a vdso or vsyscall page without the need for this flag.

The second patch then replaces the 'VM_ALWAYSDUMP' flag with a new 'VM_NODUMP' flag, which can be set by userspace using the new madvise flags: 'MADV_DONTDUMP', and unset via 'MADV_DODUMP'. The core dump filters continue to work the same as before unless 'MADV_DONTDUMP' is set on the region.

The qemu code which implements this feature is at: http://people.redhat.com/~jbaron/qemu-dump/qemu-dump.patch

In my testing the qemu core dump shrank from 383MB to 13MB with this patch. I also believe that the 'MADV_DONTDUMP' flag might be useful for security-sensitive apps, which might want to select which areas are dumped.

This patch: The VM_ALWAYSDUMP flag is currently used by the coredump code to indicate that a vma is part of a vsyscall or vdso section. However, we can determine whether a vma is in one of these sections by checking it against the gate_vma and checking for a non-NULL return value from arch_vma_name(). Thus, freeing a valuable vma bit.

Signed-off-by: Jason Baron <jbaron@redhat.com>
Acked-by: Roland McGrath <roland@hack.frob.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Avi Kivity <avi@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
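From userspace, the new flags are plain madvise() calls once the MADV_DONTDUMP patches are applied. A runnable sketch (the buffer size is made up for illustration):

```c
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 64 << 20;	/* stand-in for a large guest-memory region */
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;

	/* Exclude the region from any core dump of this process... */
	if (madvise(buf, len, MADV_DONTDUMP))
		perror("madvise(MADV_DONTDUMP)");

	/* ...and opt it back in later if desired. */
	madvise(buf, len, MADV_DODUMP);

	munmap(buf, len);
	return 0;
}
```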
-
- 01 Mar 2012, 1 commit

By Thomas Gleixner

Coccinelle-based conversion.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-24swm5zut3h9c4a6s46x8rws@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 21 Jan 2012, 2 commits

By Nicolas Pitre

Now that all implementations of arch_idle() are equivalent to cpu_do_idle(), we can just use the latter directly and stop including mach/system.h.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-and-tested-by: Jamie Iles <jamie@jamieiles.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Tested-by: Stephen Warren <swarren@nvidia.com>
-
By Nicolas Pitre

Let's factor out the need_resched() check instead of duplicating it in every pm_idle implementation, to avoid inconsistencies (omap2_pm_idle is already missing it). The forceful re-enablement of IRQs after pm_idle has returned can go: the warning certainly doesn't trigger for existing users.

To get rid of the pm_idle calling-convention oddity, let's introduce arm_pm_idle(), allowing the local_irq_enable() to be factored out of SoC-specific implementations. The default pm_idle function becomes a wrapper for arm_pm_idle() and takes care of enabling IRQs closer to where they are initially disabled.

And finally, move the comment explaining the reason for that turning off of IRQs to a more proper location.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-and-tested-by: Jamie Iles <jamie@jamieiles.com>
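The resulting default idle path, approximately (a sketch of the calling convention the patch introduces):

```c
void (*arm_pm_idle)(void);		/* SoC hook; may be NULL */

static void default_idle(void)
{
	if (arm_pm_idle)
		arm_pm_idle();		/* SoC-specific low-power entry */
	else
		cpu_do_idle();		/* architectural default (wfi) */
	/* IRQs were disabled by the caller; re-enable them here, close
	 * to where they were turned off, instead of in every SoC hook. */
	local_irq_enable();
}
```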
-
- 05 Jan 2012, 1 commit

By Russell King

Remove the now-empty arch_reset() from all the mach/system.h includes, and remove its call site. Remove arm_machine_restart(), as this function no longer does anything useful.

For Samsung platforms, remove the include of mach/system-reset.h and plat/system-reset.h from their respective mach/system.h headers, as these just define their arch_reset functions. As a result, the s3c2410 and plat-samsung system-reset.h files are no longer referenced, so remove these files entirely.

Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-by: Jamie Iles <jamie@jamieiles.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 13 Dec 2011, 1 commit

By Will Deacon

Tools such as kexec and CPU hotplug require a way to reset the processor and branch to some code in physical space. This requires various bits of jiggery pokery with the caches and MMU which, when it goes wrong, tends to lock up the system.

This patch fleshes out the soft_restart implementation so that it branches to the reset code using the identity mapping. This requires us to change to a temporary stack, held within the kernel image as a static array, to avoid conflicting with the new view of memory.

Signed-off-by: Will Deacon <will.deacon@arm.com>
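A condensed sketch of the fleshed-out soft_restart() path (details such as outer-cache handling are omitted; names follow the ARM code of that era and should be treated as illustrative):

```c
typedef void (*phys_reset_t)(unsigned long);

static u64 soft_restart_stack[16];	/* temporary stack inside the image */

static void __soft_restart(void *addr)
{
	phys_reset_t phys_reset;

	setup_mm_for_reboot();	/* install the identity mapping */
	flush_cache_all();
	cpu_proc_fin();		/* turn off caching */
	flush_cache_all();	/* push out any further dirty data */

	/* Branch to the reset code through its physical address. */
	phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset);
	phys_reset((unsigned long)addr);

	BUG();			/* should never get here */
}

void soft_restart(unsigned long addr)
{
	u64 *stack = soft_restart_stack + ARRAY_SIZE(soft_restart_stack);

	local_irq_disable();
	local_fiq_disable();

	/* Move onto the static stack so the old one can vanish from view. */
	call_with_stack(__soft_restart, (void *)addr, (void *)stack);
}
```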
-
- 12 Dec 2011, 3 commits

By Frederic Weisbecker

Those two APIs were provided to optimize the calls of tick_nohz_idle_enter() and rcu_idle_enter() into a single IRQ-disabled section, so that no interrupt happening in between would needlessly process any RCU job. But this is an optimization whose benefits have yet to be measured. Let's start simple and completely decouple the idle RCU and dyntick-idle logics to simplify things.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
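After the three patches below and this decoupling, an arch idle loop pairs the tick and RCU transitions explicitly. Schematically (a sketch, not any particular arch's exact loop):

```c
void cpu_idle(void)
{
	while (1) {
		tick_nohz_idle_enter();	/* stop the tick if possible */
		rcu_idle_enter();	/* now decoupled from the tick */
		while (!need_resched())
			cpu_do_idle();
		rcu_idle_exit();
		tick_nohz_idle_exit();	/* restart the tick */
		schedule();
	}
}
```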
-
By Frederic Weisbecker

It is assumed that RCU won't be used once we switch to tickless mode, until we restart the tick. However this is not always true, as on x86-64, where we dereference the idle notifiers after the tick is stopped.

To prepare for fixing this, add two new APIs: tick_nohz_idle_enter_norcu() and tick_nohz_idle_exit_norcu(). If no use of RCU is made in the idle loop between the tick_nohz_enter_idle() and tick_nohz_exit_idle() calls, the arch must instead call the new *_norcu() versions, so that it doesn't need to call rcu_idle_enter() and rcu_idle_exit() itself. Otherwise the arch must call tick_nohz_enter_idle() and tick_nohz_exit_idle() and also explicitly call:

- rcu_idle_enter() after its last use of RCU before the CPU is put to sleep;
- rcu_idle_exit() before the first use of RCU after the CPU is woken up.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: David Miller <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
By Frederic Weisbecker

The tick_nohz_stop_sched_tick() function, which tries to delay the next timer tick as long as possible, can be called from two places:

- from the idle loop, to start the dyntick-idle mode;
- from interrupt exit, if we have interrupted the dyntick-idle mode, so that we reprogram the next tick event in case the IRQ changed some internal state that requires this action.

There are only a few minor differences between the two, handled inside that function and driven by the per-CPU ts->inidle variable and the inidle parameter. Together these guarantee that we only update the dyntick mode on IRQ exit if we actually interrupted the dyntick-idle mode, and that we enter the RCU extended quiescent state from idle-loop entry only.

Split this function into:

- tick_nohz_idle_enter(), which sets ts->inidle to 1, enters dyntick-idle mode unconditionally if it can, and enters the RCU extended quiescent state;
- tick_nohz_irq_exit(), which only updates the dyntick-idle mode when ts->inidle is set (i.e. if tick_nohz_idle_enter() has been called).

To maintain symmetry, tick_nohz_restart_sched_tick() has been renamed to tick_nohz_idle_exit(). This simplifies the code and micro-optimizes the IRQ exit path (no need for local_irq_save there). It also prepares for the split between the dyntick and RCU extended-quiescent-state logics, which we'll need to further fix illegal uses of RCU in extended quiescent states in the idle loop.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: David Miller <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
-
- 21 Nov 2011, 2 commits

By Will Deacon

This patch implements a workaround for PL310 erratum 769419. On revisions of the PL310 prior to r3p2, the Store Buffer does not automatically drain. This can cause normal, non-cacheable writes to be retained when the memory system is idle, leading to suboptimal I/O performance for drivers using coherent DMA.

This patch adds an optional wmb() call to the cpu_idle loop. On systems with an outer cache, this causes an explicit flush of the store buffer.

Cc: stable@vger.kernel.org
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
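The workaround itself is a one-line drain in the idle loop, guarded by the new Kconfig symbol. A sketch of the inner loop:

```c
while (!need_resched()) {
#ifdef CONFIG_PL310_ERRATA_769419
	wmb();		/* explicit drain of the PL310 store buffer */
#endif
	cpu_do_idle();
}
```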
-
By Russell King

We only need to set the system up for a soft-restart if we're actually going to perform one. Provide a new function, soft_restart(), which does the setup and the final call, and make platforms use it. Eliminate the call to setup_restart() from the default handler.

This means that a platform's arch_reset() function is no longer called with the page tables prepared for a soft-restart, and caches will still be enabled.

Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Acked-by: Sascha Hauer <s.hauer@pengutronix.de>
Acked-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Krzysztof Hałasa <khc@pm.waw.pl>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Acked-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Acked-by: Wan ZongShun <mcuos.com@gmail.com>
Acked-by: Eric Miao <eric.y.miao@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 11 Nov 2011, 2 commits

By Russell King

setup_mm_for_reboot() doesn't make use of its argument, so remove it.

Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

Move the reboot-failure handling into machine_restart() so this condition is always caught, even if a platform hooks the restart via arm_pm_restart().

Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
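The failure catch now lives in the generic path, roughly (a sketch):

```c
void machine_restart(char *cmd)
{
	machine_shutdown();
	arm_pm_restart(reboot_mode, cmd);

	/* Whoops - the platform was unable to reboot. Tell the user! */
	printk("Reboot failed -- System halted\n");
	while (1)
		;
}
```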
-
- 01 Nov 2011, 1 commit

By Paul Gortmaker

Many of the core ARM kernel files are not modules, and include module.h only for exporting symbols. These files can now use the lighter-footprint export.h for this role.

There are probably lots more, but the ARM mach-* and plat-* files don't get coverage via a simple yesconfig build. They will have to be cleaned up and tested using their respective configs.

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
- 17 Oct 2011, 1 commit

By Laura Abbott

Currently, show_regs calls __backtrace, which does nothing if CONFIG_FRAME_POINTER is not set. Switch to dump_stack, which handles both CONFIG_FRAME_POINTER and CONFIG_ARM_UNWIND correctly. __backtrace is now superseded by dump_stack in general, and show_regs was its last caller, so remove __backtrace as well.

Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 05 Aug 2011, 1 commit

By David Brown

Commit a0bfa137 misspells cpuidle_idle_call() in the ARM and SH code. Fix this to be consistent.

Cc: Kevin Hilman <khilman@deeprootsystems.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: x86@kernel.org
Cc: Len Brown <len.brown@intel.com>
Signed-off-by: David Brown <davidb@codeaurora.org>
[ Also done by Mark Brown - the bug has been around forever, and was noticed in -next, but the idle tree never picked it up. Bad bad bad ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 04 Aug 2011, 1 commit

By Len Brown

cpuidle users should call cpuidle_call_idle() directly rather than via the (pm_idle)() function pointer. An architecture may choose to continue using (pm_idle)(), but cpuidle need not depend on it:

	my_arch_cpu_idle()
		...
		if (cpuidle_call_idle())
			pm_idle();

Cc: Kevin Hilman <khilman@deeprootsystems.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: x86@kernel.org
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
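The calling convention from the commit message, written out in C. my_arch_cpu_idle is the commit's own hypothetical hook; the function is spelled cpuidle_idle_call() here, per the spelling the follow-up fix above settles on:

```c
static void my_arch_cpu_idle(void)
{
	/* Returns non-zero when no cpuidle driver is registered. */
	if (cpuidle_idle_call())
		pm_idle();	/* fall back to the arch default */
}
```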
-
- 11 Apr 2011, 1 commit

By Catalin Marinas

This patch adds THREAD_NOTIFY_COPY for calling registered handlers during the copy_thread() function call. It also changes the VFP handler to use a switch statement rather than if..else, and to ignore this event.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 12 Jan 2011, 1 commit

By Will Deacon

When running without an MMU, we do not need to install a mapping for the vectors page. Attempting to do so causes a compile-time error because install_special_mapping is not defined. This patch adds compile-time guards to the vector mapping functions so that we can build nommu configurations once more.

Acked-by: Greg Ungerer <gerg@uclinux.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
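The guards look roughly like this (a sketch; install_vectors_page is a made-up helper name, and install_special_mapping() only exists on MMU kernels):

```c
#ifdef CONFIG_MMU
static int install_vectors_page(struct mm_struct *mm)
{
	return install_special_mapping(mm, 0xffff0000, PAGE_SIZE,
				       VM_READ | VM_EXEC |
				       VM_MAYREAD | VM_MAYEXEC,
				       NULL);
}
#else
static inline int install_vectors_page(struct mm_struct *mm)
{
	return 0;	/* no MMU, nothing to map */
}
#endif
```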
-
- 08 Oct 2010, 1 commit

By Kevin Hilman

In order for CPUidle to work on SMP systems, an implementation of cpu_idle_wait() is needed. This patch duplicates the x86 implementation of cpu_idle_wait() for ARM.

Tested-by: Colin Cross <ccross@android.com>
Signed-off-by: Kevin Hilman <khilman@deeprootsystems.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
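The x86 implementation being duplicated is tiny; a sketch:

```c
static void do_nothing(void *unused)
{
}

/* Kick every CPU with an IPI so each one falls out of the old
 * pm_idle handler before that handler is changed or freed. */
void cpu_idle_wait(void)
{
	smp_mb();
	smp_call_function(do_nothing, NULL, 1);
}
```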
-
- 02 Oct 2010, 1 commit

By Nicolas Pitre

The kernel makes the high vector page visible to user space. This page contains (amongst others) small code segments that can be executed in user space. Make this page visible through ptrace and /proc/<pid>/mem in order to let gdb perform the code parsing needed for proper unwinding.

For example, the ERESTART_RESTARTBLOCK handler actually has a stack frame -- it returns to a PC value stored on the user's stack. To unwind after a "sleep" system call was interrupted twice, GDB would have to recognize this situation and understand that stack frame layout -- which it currently cannot do.

We could fix this by hard-coding addresses in the vector page range into GDB, but that isn't really portable, as not all of those addresses are guaranteed to remain stable across kernel releases. And having the gdb process make an exception for this page and fetch its content from its own address space looks strange, and it is not future-proof either.

Being located above PAGE_OFFSET, this vma cannot be deleted by user space code.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
- 08 Sep 2010, 1 commit

By Will Deacon

For debuggers to take advantage of the hw-breakpoint framework in the kernel, it is necessary to expose the API calls via a ptrace interface. This patch exposes the hardware breakpoints framework as a collection of virtual registers, accessible via PTRACE_SETHBPREGS and PTRACE_GETHBPREGS requests. The breakpoints are stored in the debug_info struct of the running thread.

Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: S. Karthikeyan <informkarthik@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
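From a debugger's point of view, usage looks something like the following. A sketch: read_hbp_reg is a made-up helper, and the assumption is that PTRACE_GETHBPREGS comes from the ARM asm/ptrace.h:

```c
#include <sys/types.h>
#include <sys/ptrace.h>
#include <asm/ptrace.h>		/* PTRACE_GETHBPREGS on ARM */

/* Read virtual hw-breakpoint register `regno` of a stopped tracee.
 * Register 0 describes the debug architecture and the number of
 * available breakpoint/watchpoint slots. */
static long read_hbp_reg(pid_t pid, long regno, unsigned long *val)
{
	return ptrace(PTRACE_GETHBPREGS, pid, (void *)regno, val);
}
```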
-
- 27 Jul 2010, 2 commits

By Russell King

x86 calls machine_shutdown() from the various machine_*() calls which take the machine down ready for halting, restarting, etc., and uses it to bring the system safely to a point where those actions can be performed. Such actions include stopping the secondary CPUs.

So, change the ARM implementation of these to reflect what x86 does. This solves kexec problems on ARM SMP platforms, where the secondary CPUs were left running across the kexec call.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
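The resulting shape, mirroring x86 (a sketch):

```c
void machine_shutdown(void)
{
#ifdef CONFIG_SMP
	smp_send_stop();	/* park the secondary CPUs first */
#endif
}

/* machine_halt(), machine_power_off(), machine_restart() and the
 * kexec path all call machine_shutdown() before acting. */
void machine_restart(char *cmd)
{
	machine_shutdown();
	arm_pm_restart(reboot_mode, cmd);
}
```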
-
By Russell King

All implementations of cpu_proc_fin() start by disabling interrupts and then flushing caches. Rather than have every processor's proc_fin() implementation do this, move it out into generic code -- and move the cache flush past setup_mm_for_reboot() (so it can benefit from having caches still enabled).

This allows cpu_proc_fin() to become independent of the L1/L2 cache types, and eventually lets the L2 cache flushing move into the L2 support code.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 10 Jul 2010, 1 commit

By Russell King

CPU: Testing write buffer coherency: ok
------------[ cut here ]------------
WARNING: at kernel/lockdep.c:3145 check_flags+0xcc/0x1dc()
Modules linked in:
[<c0035120>] (unwind_backtrace+0x0/0xf8) from [<c0355374>] (dump_stack+0x20/0x24)
[<c0355374>] (dump_stack+0x20/0x24) from [<c0060c04>] (warn_slowpath_common+0x58/0x70)
[<c0060c04>] (warn_slowpath_common+0x58/0x70) from [<c0060c3c>] (warn_slowpath_null+0x20/0x24)
[<c0060c3c>] (warn_slowpath_null+0x20/0x24) from [<c008f224>] (check_flags+0xcc/0x1dc)
[<c008f224>] (check_flags+0xcc/0x1dc) from [<c00945dc>] (lock_acquire+0x50/0x140)
[<c00945dc>] (lock_acquire+0x50/0x140) from [<c0358434>] (_raw_spin_lock+0x50/0x88)
[<c0358434>] (_raw_spin_lock+0x50/0x88) from [<c00fd114>] (set_task_comm+0x2c/0x60)
[<c00fd114>] (set_task_comm+0x2c/0x60) from [<c007e184>] (kthreadd+0x30/0x108)
[<c007e184>] (kthreadd+0x30/0x108) from [<c0030104>] (kernel_thread_exit+0x0/0x8)
---[ end trace 1b75b31a2719ed1c ]---
possible reason: unannotated irqs-on.
irq event stamp: 3
hardirqs last enabled at (2): [<c0059bb0>] finish_task_switch+0x48/0xb0
hardirqs last disabled at (3): [<c002f0b0>] ret_slow_syscall+0xc/0x1c
softirqs last enabled at (0): [<c005f3e0>] copy_process+0x394/0xe5c
softirqs last disabled at (0): [<(null)>] (null)

Fix this by ensuring that the lockdep interrupt state is manipulated in the appropriate places. We essentially treat userspace as an entirely separate environment which isn't relevant to lockdep (lockdep doesn't monitor userspace), so we don't tell lockdep that IRQs will be enabled in that environment.

Instead, when creating kernel threads (a rare event compared to entering/leaving userspace) we have to update the lockdep state. Do this by starting threads with IRQs disabled and, in the kthread helper, telling lockdep that IRQs are enabled before enabling them. This provides lockdep with a consistent view of the current IRQ state in kernel space.

This also reverts portions of 0d928b0b, which didn't fix the problem.

Tested-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 15 Jun 2010, 2 commits

By Nicolas Pitre

This is the very basic stuff, without changing the canary upon task switch yet: just the Kconfig option and a constant canary value initialized at boot time.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
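The boot-time initialization amounts to roughly the following (a sketch of the ARM boot_init_stack_canary() of that era; assumes CONFIG_CC_STACKPROTECTOR and the usual includes):

```c
extern unsigned long __stack_chk_guard;	/* single global guard on ARM */

static __always_inline void boot_init_stack_canary(void)
{
	unsigned long canary;

	/* Try to get a semi-random initial value. */
	get_random_bytes(&canary, sizeof(canary));
	canary ^= LINUX_VERSION_CODE;

	current->stack_canary = canary;
	__stack_chk_guard = current->stack_canary;
}
```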
-
By Nicolas Pitre

For this feature to take effect, CONFIG_COMPAT_BRK must be turned off. It can safely be turned off for any EABI user space version.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
- 21 Apr 2010, 1 commit

By Russell King

/tmp/ccJ3ssZW.s: Assembler messages:
/tmp/ccJ3ssZW.s:1952: Error: can't resolve `.text' {.text section} - `.LFB1077'

This is caused because:

	.section .data
	.section .text
	.section .text
	.previous

does not return us to the .text section, but to the .data section; this makes the use of .previous dangerous if the ordering of previous sections is not known.

Fix up the other users of .previous: .pushsection and .popsection are a safer pairing to use than .section and .previous.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
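Why the safer pairing works, shown in inline-asm form (illustrative; example_word is a made-up label):

```c
/* .section/.previous only swaps with the single previously-active
 * section, so nested switches can land in the wrong one.
 * .pushsection/.popsection maintain a section stack and always
 * unwind to the section that was active before the push. */
asm(	".pushsection .data\n"
	"example_word:\n"
	"	.long 0\n"
	".popsection\n");
```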
-