- 29 Mar 2012, 1 commit

Submitted by David Howells

Disintegrate asm/system.h for ARM.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Russell King <linux@arm.linux.org.uk>
cc: linux-arm-kernel@lists.infradead.org

- 16 Feb 2012, 1 commit

Submitted by Olof Johansson

For files that include asm/processor.h but not asm/system.h:

	arch/arm/mach-msm/include/mach/uncompress.h: In function 'putc':
	arch/arm/mach-msm/include/mach/uncompress.h:48:3: error: implicit declaration of function 'smp_mb' [-Werror=implicit-function-declaration]

In this case, smp_mb() comes from the cpu_relax() call in the msm putc(). It likely went uncaught when the uncompress.h change went in, since the defconfig did not enable that code path, but later changes (e76f4750: "ARM: debug: arrange Kconfig options more logically") turned the option on for msm_defconfig and thus exposed it.

Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 06 Dec 2011, 1 commit

Submitted by Rob Herring

Similar to other architectures, this adds top-down mmap support to the user process address space allocation policy. This allows mmap sizes greater than 2GB. The support is largely copied from MIPS and the generic implementations. The address space randomization is moved into arch_pick_mmap_layout. Tested on V-Express with Ubuntu and an mmap test from here: https://bugs.launchpad.net/bugs/861296

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

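Not the Launchpad test itself, but a minimal userspace sketch of the effect described above: with a top-down layout on 32-bit ARM, a single anonymous mapping larger than 2 GiB has room to succeed, whereas the old bottom-up policy typically could not find a large enough hole.

	#include <stdio.h>
	#include <sys/mman.h>

	int main(void)
	{
		/* Try to create one 2.5 GiB anonymous mapping in a single call. */
		size_t len = (size_t)5 << 29;
		void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);

		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		printf("mapped %zu bytes at %p\n", len, p);
		munmap(p, len);
		return 0;
	}
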
- 10 Mar 2011, 1 commit

Submitted by Will Deacon

On revisions of the Cortex-A9 prior to r2p0, the Store Buffer does not have any automatic draining mechanism, and therefore a livelock may occur if an external agent continuously polls a memory location waiting to observe an update. This workaround defines cpu_relax() as smp_mb(), preventing correctly written polling loops from denying visibility of updates to memory.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

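A sketch of the shape of the resulting definition in asm/processor.h (the erratum Kconfig symbol and exact condition are recalled from memory, not quoted from the patch):

	/* Drain the store buffer on affected cores, otherwise just a compiler barrier. */
	#if __LINUX_ARM_ARCH__ == 6 || defined(CONFIG_ARM_ERRATA_754327)
	#define cpu_relax()	smp_mb()
	#else
	#define cpu_relax()	barrier()
	#endif
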
- 24 Feb 2011, 1 commit

Submitted by Will Deacon

PTRACE_SINGLESTEP is a ptrace request designed to offer single-stepping support to userspace when the underlying architecture has hardware support for this operation. On ARM, we set arch_has_single_step() to 1 and attempt to emulate hardware single-stepping by disassembling the current instruction to determine the next pc and placing a software breakpoint on that location. Unfortunately this has the following problems:

1. Only a subset of ARMv7 instructions are supported
2. Thumb-2 is unsupported
3. The code is not SMP safe

We could try to fix this code, but it turns out that because of the above issues it is rarely used in practice. GDB, for example, uses PTRACE_POKETEXT and PTRACE_PEEKTEXT to manage breakpoints itself and does not require any kernel assistance. This patch removes the single-step emulation code from ptrace, meaning that the PTRACE_SINGLESTEP request will return -EIO on ARM. Portable code must check the return value from a ptrace call and handle the failure gracefully.

Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

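A sketch of the portability point above, from the tracer's side (hypothetical helper name try_single_step): check the return value and fall back when the request is not supported.

	#include <errno.h>
	#include <stdio.h>
	#include <sys/ptrace.h>
	#include <sys/types.h>

	/* Request a single step, but cope with architectures (such as ARM after
	 * this change) where PTRACE_SINGLESTEP fails with EIO, in which case the
	 * tracer must fall back to software breakpoints of its own. */
	static int try_single_step(pid_t child)
	{
		if (ptrace(PTRACE_SINGLESTEP, child, NULL, NULL) == -1) {
			if (errno == EIO) {
				fprintf(stderr, "no single-step support; "
						"fall back to PTRACE_POKETEXT breakpoints\n");
				return -1;
			}
			perror("ptrace(PTRACE_SINGLESTEP)");
			return -1;
		}
		return 0;
	}
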
- 08 Sep 2010, 1 commit

Submitted by Will Deacon

For debuggers to take advantage of the hw-breakpoint framework in the kernel, it is necessary to expose the API calls via a ptrace interface. This patch exposes the hardware breakpoint framework as a collection of virtual registers, accessible using PTRACE_SETHBPREGS and PTRACE_GETHBPREGS requests. The breakpoints are stored in the debug_info struct of the running thread.

Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: S. Karthikeyan <informkarthik@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

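A heavily hedged userspace sketch of what reading one of those virtual registers might look like. The request number and the field layout of register 0 are assumptions reconstructed from memory of the ARM ptrace code, not a documented guarantee.

	#include <stdio.h>
	#include <sys/ptrace.h>
	#include <sys/types.h>

	/* ARM-specific request numbers live in the kernel's asm/ptrace.h;
	 * the value below is an assumption for the purpose of this sketch. */
	#ifndef PTRACE_GETHBPREGS
	#define PTRACE_GETHBPREGS 29
	#endif

	/* Virtual register 0 is assumed to pack resource information:
	 * low byte = breakpoint count, next byte = watchpoint count. */
	static void show_hbp_resources(pid_t child)
	{
		unsigned long info = 0;

		if (ptrace(PTRACE_GETHBPREGS, child, (void *)0, &info) == 0)
			printf("hw breakpoints: %lu, watchpoints: %lu\n",
			       info & 0xff, (info >> 8) & 0xff);
		else
			perror("PTRACE_GETHBPREGS");
	}
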
- 01 Jul 2010, 1 commit

Submitted by Will Deacon

Linux expects that if a CPU modifies a memory location, then that modification will eventually become visible to other CPUs in the system. On an ARM11MPCore processor, loads are prioritised over stores, so it is possible for a store operation to be postponed if a polling loop immediately follows it. If the variable being polled indirectly depends on the outstanding store (for example, another CPU may be polling the variable that is pending modification), then there is the potential for deadlock if interrupts are disabled. This deadlock occurs in the KGDB test suite when executing on an SMP ARM11MPCore configuration. This patch changes the definition of cpu_relax() to smp_mb() for ARMv6 cores, forcing a flush of the write buffer on SMP systems before the next load takes place. If the kernel is not compiled for SMP support, this will expand to a barrier() as before.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

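An illustrative kernel-context sketch (not code from the patch) of the deadlock pattern: with interrupts off, CPU A's flag store can sit in the write buffer while its polling loads keep winning priority, so CPU B never sees the flag and never writes the acknowledgement. Defining cpu_relax() as smp_mb() forces the buffered store out on each iteration.

	#include <asm/processor.h>	/* cpu_relax() */

	/* CPU A: publish a flag, then wait for an acknowledgement. */
	static void cpu_a(volatile int *flag, volatile int *ack)
	{
		*flag = 1;		/* may linger in the write buffer          */
		while (!*ack)
			cpu_relax();	/* with this change, also drains the buffer */
	}

	/* CPU B: wait for the flag, then acknowledge it. */
	static void cpu_b(volatile int *flag, volatile int *ack)
	{
		while (!*flag)
			cpu_relax();
		*ack = 1;
	}
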
- 30 May 2009, 1 commit

Submitted by Catalin Marinas

Starting with ARMv6, the CPUs support the BE-8 variant of big-endian (byte-invariant). This patch adds the core support:

- setting of the BE-8 mode via the CPSR.E register for both kernel and user threads
- big-endian page table walking
- REV used to rotate instructions read from memory during fault processing, as they are still in little-endian format
- Kconfig and Makefile support for BE-8. The --be8 option must be passed to the final linking stage to convert the instructions to little-endian

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 06 Dec 2008, 1 commit

Submitted by Lennert Buytenhek

Commit 8ec53663 ("[ARM] Improve non-executable support") added support for detecting non-executable stack binaries. One of the things it does is make READ_IMPLIES_EXEC be set in ->personality if we are running on a CPU that doesn't support the XN ("Execute Never") page table bit, or if we are running a binary that needs an executable stack. This exposed a latent bug in ARM's asm/processor.h which causes the stack to be placed at a very low address, where it will bump into the heap on any application that uses a significant amount of stack or heap or both, causing many interesting crashes. Fix this by testing the ADDR_LIMIT_32BIT bit in ->personality instead of testing for equality against PER_LINUX_32BIT.

Reviewed-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

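A reduced sketch of the bug class (hypothetical helper names; the real change is in the STACK_TOP definition in asm/processor.h): ->personality is a bit mask, so an equality test against PER_LINUX_32BIT stops matching as soon as another flag such as READ_IMPLIES_EXEC is OR'ed in.

	#include <linux/personality.h>

	/* Fragile: becomes false once READ_IMPLIES_EXEC (or any other flag) is set. */
	static inline int is_32bit_task_fragile(unsigned int personality)
	{
		return personality == PER_LINUX_32BIT;
	}

	/* Robust: test only the bit that matters. */
	static inline int is_32bit_task_robust(unsigned int personality)
	{
		return (personality & ADDR_LIMIT_32BIT) != 0;
	}
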
- 27 Nov 2008, 1 commit

Submitted by Russell King

As suggested by Andrew Morton, remove memzero() - it's not supported on other architectures, so use of it is a potential build-breaking bug. Since the compiler optimizes memset(x,0,n) to __memzero() perfectly well, we don't miss out on the underlying benefits of memzero().

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

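A trivial before/after illustration (hypothetical structure; any zero-fill works the same way): the portable memset() replaces the ARM-only memzero(), and on ARM the compiler still lowers it to __memzero().

	#include <string.h>

	struct frame_stats {		/* hypothetical example structure */
		unsigned long rx_frames;
		unsigned long tx_frames;
	};

	static void frame_stats_reset(struct frame_stats *s)
	{
		/* was: memzero(s, sizeof(*s));  -- ARM-only, breaks other arches */
		memset(s, 0, sizeof(*s));
	}
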
- 16 Aug 2008, 1 commit

Submitted by Nicolas Pitre

With gcc 4.3 and later, a pointer that has already been dereferenced is assumed not to be null, since it would have caused a segmentation fault otherwise, hence any subsequent test against NULL is optimized away. The inline asm constraint currently used in the implementation of prefetch() makes gcc believe that the pointer is dereferenced, even though the PLD instruction does not load any data and does not cause a segmentation fault on null pointers. This causes all sorts of interesting results when reaching the end of a linked list, for example. Let's use a better constraint to properly represent the actual usage of the pointer value. Problem reported by Chris Steel.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

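A rough sketch of the kind of constraint change involved (the exact operands are reconstructed from memory, not quoted from the patch): the old memory operand tells gcc that *ptr is accessed, while an address operand passes only the pointer value.

	/* Before (approximate): "o" memory operand implies a dereference of *ptr. */
	static inline void prefetch_before(const void *ptr)
	{
		__asm__ __volatile__("pld\t%0" : : "o" (*(const char *)ptr) : "cc");
	}

	/* After (approximate): "p" address operand, no implied dereference. */
	static inline void prefetch_after(const void *ptr)
	{
		__asm__ __volatile__("pld\t%a0" : : "p" (ptr));
	}
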
- 03 Aug 2008, 1 commit

Submitted by Russell King

Move platform-independent header files to arch/arm/include/asm, leaving those in asm/arch* and asm/plat* alone.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 09 Feb 2008, 1 commit

Submitted by David Howells

Move STACK_TOP[_MAX] out of asm/a.out.h and into asm/processor.h, as they're required whether or not A.OUT format is available.

Signed-off-by: David Howells <dhowells@redhat.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 14 Dec 2006, 1 commit

Submitted by Nicolas Pitre

The gcc manual says:

| `-fdelete-null-pointer-checks'
|     Use global dataflow analysis to identify and eliminate useless
|     checks for null pointers.  The compiler assumes that dereferencing
|     a null pointer would have halted the program.  If a pointer is
|     checked after it has already been dereferenced, it cannot be null.
|     Enabled at levels `-O2', `-O3', `-Os'.

Now the problem can be seen with this test case:

	#include <linux/prefetch.h>

	extern void bar(char *x);

	void foo(char *x)
	{
		prefetch(x);
		if (x)
			bar(x);
	}

Because the constraint to the inline asm used in the prefetch() macro is a memory operand, gcc assumes that the asm code does dereference the pointer, and the delete-null-pointer-checks optimization kicks in. Inspection of the generated assembly for the above example shows that bar() is indeed called unconditionally, without any test on the value of x. Of course in the prefetch case there is no real dereference, and it cannot be assumed that a null pointer would have been caught at that point. This causes kernel oopses with constructs like hlist_for_each_entry(), where the list's 'next' content is prefetched before the pointer is tested against NULL - and only when gcc feels like applying this optimization, which doesn't happen all the time with more complex code.

It appears that the way to prevent the delete-null-pointer-checks optimization from occurring in this case is to make prefetch() into a static inline function instead of a macro. At least this is what is done on x86_64, where a similar inline asm memory operand is used (presumably they would have seen the same problem if it didn't work), and the resulting code for the above example confirms that.

An alternative would consist of replacing the memory operand by a register operand containing the pointer, and using the addressing mode explicitly in the asm template. But that would be less optimal than an offsettable memory reference.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

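A hedged sketch of the approach the commit describes (simplified, not the literal patch): moving the asm from a macro into a static inline function, so the "dereference" implied by the memory operand is no longer attributed to the caller's pointer at the call site.

	/* Old: macro form, expanded inline at every call site. */
	#define prefetch_as_macro(ptr) \
		__asm__ __volatile__("pld\t%0" : : "o" (*(const char *)(ptr)) : "cc")

	/* New: static inline wrapper, as done on x86_64. */
	static inline void prefetch_as_inline(const void *ptr)
	{
		__asm__ __volatile__("pld\t%0" : : "o" (*(const char *)ptr) : "cc");
	}
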
- 30 Nov 2006, 1 commit

Submitted by Russell King

These files want to provide/access ELF hwcap information, so they should be including asm/elf.h rather than asm/procinfo.h.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 14 Jan 2006, 1 commit

Submitted by Hyok S. Choi

This patch supports start_thread in nommu mode, which requires the base index register.

Signed-off-by: Hyok S. Choi <hyok.choi@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 13 Jan 2006, 2 commits

Submitted by Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

Submitted by Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

- 05 May 2005, 1 commit

Submitted by Russell King

Various places in the ARM kernel implicitly assumed that kernel stacks are always 8K due to hard-coded constants. Replace these constants with definitions. Correct the allowable range of kernel stack pointer values within the allocation. Arrange for the entire kernel stack to be zeroed, not just the upper 4K, if CONFIG_DEBUG_STACK_USAGE is set.

Signed-off-by: Russell King <rmk@arm.linux.org.uk>

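A rough idea of what "replace these constants with definitions" looks like (names and values are indicative only and may not match the 2005 patch exactly):

	#include <asm/page.h>	/* PAGE_SIZE */

	#define THREAD_SIZE_ORDER	1
	#define THREAD_SIZE		(PAGE_SIZE << THREAD_SIZE_ORDER)	/* 8K with 4K pages */
	#define THREAD_START_SP		(THREAD_SIZE - 8)	/* highest legal stack pointer in the allocation */
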
- 17 Apr 2005, 1 commit

Submitted by Linus Torvalds

Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!