- 12 Feb, 2011 (1 commit)
-
-
By Russell King

This allows the cache/processor/fault glue to be more easily used from assembler code. Tested on Assabet and Tegra 2.

Tested-by: Colin Cross <ccross@android.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 11 Feb, 2011 (1 commit)
-
-
By Russell King

Add a function to set the SCU low-power mode for SMP CPUs. This centralizes the functionality rather than having to expose the SCU register definitions to each platform.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
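A rough sketch of what such a centralized helper could look like (an illustration, not the code from this commit; the 0x08 per-CPU power-status offset and the mode encodings are assumptions based on the SCU programming model):

    /* Hypothetical scu_power_mode() helper; offsets and encodings assumed. */
    #include <linux/errno.h>
    #include <linux/io.h>
    #include <linux/smp.h>

    #define SCU_PM_NORMAL   0
    #define SCU_PM_DORMANT  2
    #define SCU_PM_POWEROFF 3
    #define SCU_CPU_STATUS  0x08    /* one power-status byte per CPU */

    int scu_power_mode(void __iomem *scu_base, unsigned int mode)
    {
        unsigned int val, cpu = smp_processor_id();

        if (mode > 3 || mode == 1 || cpu > 3)
            return -EINVAL;

        /* Each CPU owns one byte of the power status register. */
        val = readb_relaxed(scu_base + SCU_CPU_STATUS + cpu) & ~0x03;
        val |= mode;
        writeb_relaxed(val, scu_base + SCU_CPU_STATUS + cpu);

        return 0;
    }

Platform suspend/hotplug code would then call this helper instead of poking SCU registers directly.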
-
- 01 Feb, 2011 (1 commit)
-
-
By Russell King

Allow non-ARM SMP processors to use the SMP_ON_UP feature. CPUs supporting SMP must have the new CPU ID format, so check for this first. Then check for ARM11MPCore, which fails the MPIDR check. Lastly, check that the MPIDR reports multiprocessing extensions and that the CPU is part of a multiprocessing system.

Cc: <stable@kernel.org>
Reported-and-Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
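For illustration only, the decision being described can be written in C roughly as below; the real check lives in early assembly (__fixup_smp), and the ARM11MPCore MIDR constant and MPIDR bit positions used here are assumptions:

    #include <linux/types.h>

    /* Sketch of the SMP_ON_UP eligibility test; bit positions assumed:
     * MPIDR[31] = new multiprocessor format, MPIDR[30] = U (uniprocessor) bit. */
    static bool cpu_supports_smp(u32 midr, u32 mpidr)
    {
        /* New CPU ID format: the MIDR architecture field reads 0xF. */
        if (((midr >> 16) & 0xf) != 0xf)
            return false;

        /* ARM11MPCore is SMP-capable but would fail the MPIDR test
         * below (implementer/part constant is illustrative). */
        if ((midr & 0xff0ffff0) == 0x410fb020)
            return true;

        /* MP extensions implemented and not marked uniprocessor. */
        return (mpidr & (1u << 31)) && !(mpidr & (1u << 30));
    }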
-
- 26 Jan, 2011 (1 commit)
-
-
By Russell King

Ensure that the twd timer reload value is reprogrammed each time we enter periodic mode, so that the reload value is always set correctly.

Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Colin Cross <ccross@android.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
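As a sketch of the idea (register names, offsets, control bits and the twd_base/twd_timer_rate globals are assumptions here), the reload value is written inside the periodic branch of the set_mode handler rather than only once at setup time:

    #include <linux/clockchips.h>
    #include <linux/io.h>

    /* Assumed globals provided elsewhere in the TWD driver. */
    extern void __iomem *twd_base;
    extern unsigned long twd_timer_rate;

    #define TWD_TIMER_LOAD      0x00    /* offsets assumed for illustration */
    #define TWD_TIMER_CONTROL   0x08
    #define TWD_CTRL_ENABLE     (1 << 0)
    #define TWD_CTRL_PERIODIC   (1 << 1)
    #define TWD_CTRL_IT_ENABLE  (1 << 2)

    static void twd_set_mode(enum clock_event_mode mode,
                             struct clock_event_device *clk)
    {
        unsigned long ctrl;

        switch (mode) {
        case CLOCK_EVT_MODE_PERIODIC:
            ctrl = TWD_CTRL_ENABLE | TWD_CTRL_PERIODIC | TWD_CTRL_IT_ENABLE;
            /* Reprogram the reload value on every entry to periodic mode. */
            writel_relaxed(twd_timer_rate / HZ, twd_base + TWD_TIMER_LOAD);
            break;
        case CLOCK_EVT_MODE_ONESHOT:
            ctrl = TWD_CTRL_IT_ENABLE;
            break;
        default:
            ctrl = 0;
            break;
        }

        writel_relaxed(ctrl, twd_base + TWD_TIMER_CONTROL);
    }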
-
- 15 Jan, 2011 (3 commits)
-
-
By Russell King

When DEBUG_LL is not set, we don't want __error_a re-entering __lookup_machine_type - we want it to go to the error function. This used to be the case before we reorganized the layout for hotplug CPU, as we used to fall through to __error. With the changed layout, we need an explicit branch here instead.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

Rabin Vincent reports:
| On SMP, this BUG() in save_stack_trace_tsk() can be easily triggered
| from user space by reading /proc/$PID/stack, where $PID is any pid but
| the current process:
|
|	if (tsk != current) {
| #ifdef CONFIG_SMP
|		/*
|		 * What guarantees do we have here that 'tsk'
|		 * is not running on another CPU?
|		 */
|		BUG();
| #else

Fix this by replacing the BUG() with an entry to terminate the stack trace, returning an empty trace - I'd rather not expose the dwarf unwinder to a volatile stack of a running thread.

Reported-by: Rabin Vincent <rabin@rab.in>
Tested-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Dima Zavin

Do not use memory bank info to request the "system ram" resources, as it does not track holes created by memblock_remove inside the machine's reserve callback. If the removed memory is passed as a platform_device's ioresource, then drivers that call request_mem_region would fail due to a conflict with the incorrectly configured system ram resource. Instead, iterate through the regions of memblock.memory and add those as "System RAM" resources.

Signed-off-by: Dima Zavin <dima@android.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
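The approach can be sketched roughly as follows; the iterator and pfn helpers reflect the memblock API of that era and are used here as assumptions, not as the exact patch:

    #include <linux/ioport.h>
    #include <linux/memblock.h>
    #include <linux/slab.h>

    /* Register each memblock.memory region as a "System RAM" resource,
     * so holes punched by memblock_remove() are respected. */
    static void __init request_system_ram(void)
    {
        struct memblock_region *region;

        for_each_memblock(memory, region) {
            struct resource *res = kzalloc(sizeof(*res), GFP_KERNEL);

            res->name  = "System RAM";
            res->start = __pfn_to_phys(memblock_region_memory_base_pfn(region));
            res->end   = __pfn_to_phys(memblock_region_memory_end_pfn(region)) - 1;
            res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;

            request_resource(&iomem_resource, res);
        }
    }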
-
- 14 Jan, 2011 (1 commit)
-
-
By David Rientjes

Four architectures (arm, mips, sparc, x86) use __vmalloc_area() for module_init(). Much of the code is duplicated and can be generalized in a globally accessible function, __vmalloc_node_range(). __vmalloc_node() now calls into __vmalloc_node_range() with a range of [VMALLOC_START, VMALLOC_END) for functionally equivalent behavior. Each architecture may then use __vmalloc_node_range() directly to remove the duplicated code.

Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
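Conceptually the refactor turns the old entry point into the wrapper below; the parameter list is an approximation of the interface described, not a copy of the patch:

    #include <linux/vmalloc.h>

    /* __vmalloc_node() becomes a thin wrapper over the range-based helper. */
    void *__vmalloc_node(unsigned long size, unsigned long align,
                         gfp_t gfp_mask, pgprot_t prot,
                         int node, void *caller)
    {
        /* Unchanged behaviour: allocate anywhere in vmalloc space. */
        return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END,
                                    gfp_mask, prot, node, caller);
    }

Architectures that need a different virtual range (such as a dedicated module area) call __vmalloc_node_range() directly instead of open-coding their own allocator.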
-
- 13 Jan, 2011 (2 commits)
-
-
By Lennert Buytenhek

Signed-off-by: Lennert Buytenhek <buytenh@secretlab.ca>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Lennert Buytenhek

Signed-off-by: Lennert Buytenhek <buytenh@secretlab.ca>
-
- 12 Jan, 2011 (4 commits)
-
-
By Alexander Holler

When CONFIG_CMDLINE_FORCE is used, the warning "Ignoring unrecognised tag 0x54410009" was displayed. Change this to "Ignoring tag cmdline (using the default kernel command line)".

Signed-off-by: Alexander Holler <holler@ahsoftware.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Will Deacon

When running without an MMU, we do not need to install a mapping for the vectors page. Attempting to do so causes a compile-time error because install_special_mapping is not defined. This patch adds compile-time guards to the vector mapping functions so that we can build nommu configurations once more.

Acked-by: Greg Ungerer <gerg@uclinux.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

The purpose of the minsec argument is to prevent 64-bit math overflow when the number of cycles is multiplied up. However, the multiplier is 32-bit, and in the sched_clock() case the cycle counter is up to 32-bit as well, so the math can never overflow. With a value of 60, and clock rates greater than 71MHz, the calculated multiplier is unnecessarily reduced in value, which reduces accuracy by maybe 70ppt. It's almost not worth bothering with, as the oscillator driving the counter won't be off by any more than 1ppm - unless you're using a rubidium lamp or caesium fountain frequency standard. So, set the minsec argument to zero.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
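The overflow argument is simple enough to check with plain arithmetic: a 32-bit cycle count times a 32-bit multiplier always fits in 64 bits, so minsec provides no protection in this case. A small standalone check (illustrative, not kernel code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t max_cyc  = 0xffffffffull;  /* worst-case 32-bit counter value */
        uint64_t max_mult = 0xffffffffull;  /* worst-case 32-bit multiplier */
        uint64_t product  = max_cyc * max_mult;

        /* (2^32 - 1)^2 = 18446744065119617025 < 2^64, so no overflow. */
        printf("max product = %llu\n", (unsigned long long)product);
        return 0;
    }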
-
By Russell King

sched_clock is supposed to be initialized early - in the recently added init_early platform hook. However, in doing so we end up calling mod_timer() before the timer lists are initialized, resulting in an oops. Split the initialization in two: the part which the platform calls early, which starts things off, and the addition of the timer, which can be delayed until after we have more of the kernel initialized - when the normal time sources are initialized.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 11 Jan, 2011 (1 commit)
-
-
By Russell King

The fraction of MHz was not being displayed correctly, as the calculation was a factor of 10 out. Fix this.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
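As a quick illustration of the kind of scaling error being fixed (a standalone example, not the kernel code): printing a rate in Hz as MHz with two decimal places needs a divide by 10000 for the fractional part; dividing by 1000 shifts the fraction by a factor of 10 and prints the wrong digits.

    #include <stdio.h>

    int main(void)
    {
        unsigned long rate = 67108864;  /* 67.108864 MHz, for example */

        /* Correct scaling: prints 67.10MHz */
        printf("%lu.%02luMHz\n", rate / 1000000, (rate / 10000) % 100);
        /* Off by a factor of 10: prints 67.08MHz (wrong digits) */
        printf("%lu.%02luMHz\n", rate / 1000000, (rate / 1000) % 100);
        return 0;
    }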
-
- 24 Dec, 2010 (5 commits)
-
-
By Russell King

This allows platforms to hook into the initialization early to set up things like scheduler clocks, etc.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

Rather than storing each machine init hook separately, store a pointer to the machine description record and dereference this instead. This pointer is only available while the init sections are present, which is not a problem as we only use it from init code.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Magnus Damm

Per-subarch interrupt handler macros, V3. This patch breaks out code from the irq_handler macro into arch_irq_handler and arch_irq_handler_default. The macros are put in the header file "entry-macro-multi.S". The arch_irq_handler_default macro is designed to be used by irq_handler in entry-armv.S, while arch_irq_handler is suitable for per-subarch use.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By eric miao

Normally, different ARM platforms have different ways to decode the IRQ hardware status and demultiplex it to the corresponding IRQ handler. This is highly optimized by the irq_handler macro in entry-armv.S, and each machine defines its own macro to decode the IRQ number. However, this prevents multiple machine classes from being built into a single kernel. By allowing each machine to specify its own handler, and making the function pointer 'handle_arch_irq' point to it at run time, this can be solved. Introduce CONFIG_MULTI_IRQ_HANDLER to allow both solutions to work. Compared with the highly optimized irq_handler macro, the new function must be written with care not to lose too much performance, and the IPI stuff on SMP is expected to move to the provided arch IRQ handler as well. The assembly code to invoke handle_arch_irq is optimized by Russell King.

Signed-off-by: Eric Miao <eric.miao@canonical.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
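The run-time dispatch boils down to a single writable function pointer; a minimal sketch is below (the setter name is hypothetical - in this series the pointer is filled in from the machine description's handle_irq field):

    #include <linux/init.h>
    #include <asm/ptrace.h>

    /* The low-level IRQ entry code branches through this pointer when
     * CONFIG_MULTI_IRQ_HANDLER is enabled. */
    void (*handle_arch_irq)(struct pt_regs *regs);

    void __init register_arch_irq_handler(void (*handler)(struct pt_regs *))
    {
        if (handle_arch_irq)
            return;     /* first registration wins */
        handle_arch_irq = handler;
    }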
-
By Todd Android Poynor

If the irqsoff tracer is in use, stop tracing the interrupt-disable interval when returning to userspace. Tracing userspace execution time as interrupts-disabled time is not helpful for kernel performance analysis purposes. Only do so if the irqsoff tracer is enabled, to avoid overhead for lockdep, which doesn't care.

Signed-off-by: Todd Poynor <toddpoynor@google.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 23 Dec, 2010 (1 commit)
-
-
By Russell King

Provide common sched_clock() infrastructure for platforms to use to create a 64-bit ns-based sched_clock() implementation from a counter running at a non-variable clock rate.

This implementation is based upon maintaining an epoch for the counter and an epoch for the nanosecond time. When we desire a sched_clock() time, we calculate the number of counter ticks since the last epoch update, convert this to nanoseconds and add it to the epoch nanoseconds. We regularly refresh these epochs within the counter wrap interval, performing a similar calculation and storing the new epochs.

We read and write the epochs in such a way that sched_clock() can easily (and locklessly) detect when an update is in progress, and repeat the loading of these constants when they're known not to be stable. The one caveat is that sched_clock() is not called in the middle of an update; we achieve that by disabling IRQs.

Finally, if the clock rate is known at compile time, the counter-to-ns conversion factors can be specified, allowing sched_clock() to be tightly optimized. We ensure that these factors are correct by providing an initialization function which performs a run-time check.

Acked-by: Peter Zijlstra <peterz@infradead.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Mikael Pettersson <mikpe@it.uu.se>
Tested-by: Eric Miao <eric.y.miao@gmail.com>
Tested-by: Olof Johansson <olof@lixom.net>
Tested-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
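The lockless read side described above can be sketched like this (simplified; the variable names and the read_counter() helper are assumptions for illustration, and the usual kernel type/barrier headers are taken as in scope):

    #include <linux/types.h>

    /* Two copies of the cycle epoch let the reader detect a concurrent
     * update: the writer stores epoch_cyc, then epoch_ns, then
     * epoch_cyc_copy, with write barriers in between. */
    static u32 epoch_cyc, epoch_cyc_copy;
    static u64 epoch_ns;
    static u32 mult, shift;     /* counter-to-ns conversion factors */

    static inline u64 cyc_to_ns(u64 cyc)
    {
        return (cyc * mult) >> shift;
    }

    unsigned long long notrace sched_clock(void)
    {
        u64 ns;
        u32 cyc, epoch;

        do {
            epoch = epoch_cyc;
            smp_rmb();
            ns = epoch_ns;
            smp_rmb();
        } while (epoch != epoch_cyc_copy);  /* retry if an update was in flight */

        cyc = read_counter();   /* platform-supplied counter read (assumed) */
        return ns + cyc_to_ns(cyc - epoch);
    }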
-
- 22 Dec, 2010 (2 commits)
-
-
By Russell King

We have two places where we create identity mappings - one when we bring secondary CPUs online, and one where we set up some mappings for soft-reboot. Combine these two into a single implementation. Also consolidate the identity mapping deletion function.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

The MMU is always configured to read page tables from the L2 cache, so there's little point flushing them out of the L2 cache back to RAM. Remove these flushes.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 21 Dec, 2010 (1 commit)
-
-
By Russell King

When we soft-CPU-hotplug a CPU, we reset the stack pointer and jump back to start_secondary(). This allows us to restart as if the CPU was actually reset. However, we weren't resetting the frame pointer, which could cause problems with backtracing. Reset the frame pointer to zero (which means no parent frame), just like the early assembly code also does.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 20 Dec, 2010 (16 commits)
-
-
By Russell King

smp.c is becoming too large, so split out the TLB maintenance broadcasting into a separate smp_tlb.c file.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

When a CPU is hot unplugged, the generic tick code cleans up the clock event device, but fails to call down to the device's set_mode function to actually shut the device down. To work around this, we've historically had a local_timer_stop() callback out of the hotplug code. However, this adds needless complexity when we have the clock event device itself available. Explicitly call the clock event device's set_mode function with CLOCK_EVT_MODE_UNUSED, so that the hardware can be cleanly shut down without any special external callbacks. When/if the generic code is fixed, percpu_timer_stop() can be killed off.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
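With the set_mode-based clockevents interface of that era, this amounts to calling the device's own hook directly, roughly as below (a sketch of the idea rather than the exact patch):

    #include <linux/clockchips.h>
    #include <linux/percpu.h>
    #include <linux/smp.h>

    static DEFINE_PER_CPU(struct clock_event_device, percpu_clockevent);

    /* Shut down this CPU's local timer via its own set_mode hook,
     * instead of a platform-specific local_timer_stop() callback. */
    static void percpu_timer_stop(void)
    {
        unsigned int cpu = smp_processor_id();
        struct clock_event_device *evt = &per_cpu(percpu_clockevent, cpu);

        evt->set_mode(CLOCK_EVT_MODE_UNUSED, evt);
    }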
-
By Russell King

We used to print a bland error message which gave no clue as to the failure when we failed to bring up a secondary CPU. Resolve this by separating the two failure cases. If boot_secondary() fails, we print a message indicating the returned error code from boot_secondary():

	"CPU%u: failed to boot: %d\n", cpu, ret

However, if boot_secondary() succeeded, but the CPU did not appear to mark itself online within the timeout, indicate that it failed to come online:

	"CPU%u: failed to come online\n", cpu

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Dave Martin

* __fixup_smp_on_up has been modified with support for the THUMB2_KERNEL case. For THUMB2_KERNEL only, fixups are split into halfwords in case of misalignment, since we can't rely on unaligned accesses working before turning the MMU on. No attempt is made to optimise the aligned case, since the number of fixups is typically small, and it seems best to keep the code as simple as possible.

* Add a rotate in the fixup_smp code in order to support CPU_BIG_ENDIAN, as suggested by Nicolas Pitre.

* Add an assembly-time sanity check to ALT_UP() to ensure that the content really is the right size (4 bytes). (No check is done for ALT_SMP(). Possibly, this could be fixed by splitting the two uses of ALT_SMP() (ALT_SMP...SMP_UP versus ALT_SMP...SMP_UP_B) into two macros. In the first case, ALT_SMP needs to expand to >= 4 bytes, not == 4.)

* smp_mpidr.h (which implements ALT_SMP()/ALT_UP() manually due to macro limitations) has not been modified: the affected instruction (mov) has no 16-bit encoding, so the correct instruction size is satisfied in this case.

* A "mode" parameter has been added to smp_dmb:
	smp_dmb arm	@ assumes 4-byte instructions (for ARM code, e.g. kuser)
	smp_dmb		@ uses W() to ensure 4-byte instructions for ALT_SMP()
  This avoids assembly failures due to use of W() inside smp_dmb, when assembling pure-ARM code in the vectors page. There might be a better way to achieve this.

* Kconfig: make SMP_ON_UP depend on (!THUMB2_KERNEL || !BIG_ENDIAN), i.e. THUMB2_KERNEL is now supported, but only if !BIG_ENDIAN (the fixup code for Thumb-2 currently assumes little-endian order).

Tested using a single generic realview kernel on:
	ARM RealView PB-A8 (CONFIG_THUMB2_KERNEL={n,y})
	ARM RealView PBX-A9 (SMP)

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

Don't call idle_task_exit() with interrupts disabled, and ensure that we have a memory barrier after interrupts are disabled but before signalling that this CPU has shut down.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

We always need to wait for the dying CPU to reach a safe state before taking it down, irrespective of the requirements of the platform. Move the completion code into the ARM SMP hotplug code rather than having each platform re-implement this.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
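The handshake being centralized looks roughly like this (a sketch; the 5-second timeout and the platform_cpu_* hook names are treated here as assumptions about the ARM hotplug glue of that era):

    #include <linux/completion.h>
    #include <linux/jiffies.h>
    #include <linux/kernel.h>
    #include <linux/smp.h>

    static DECLARE_COMPLETION(cpu_died);

    /* Runs on a surviving CPU: wait for the dying CPU to report in. */
    void __cpu_die(unsigned int cpu)
    {
        if (!wait_for_completion_timeout(&cpu_died, msecs_to_jiffies(5000)))
            pr_err("CPU%u: cpu didn't die\n", cpu);

        platform_cpu_kill(cpu);     /* platform hook (assumed) */
    }

    /* Runs on the CPU going offline. */
    void cpu_die(void)
    {
        /* ... idle_task_exit(), disable IRQs, memory barrier ... */
        complete(&cpu_died);
        platform_cpu_die(smp_processor_id());   /* platform hook (assumed) */
    }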
-
By Russell King

All platforms call trace_hardirqs_off() in their secondary startup code, so move this into the core SMP code - it doesn't need to be in the per-platform code.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

There is a certain amount of smp_prepare_cpus() which doesn't belong in the platform support code - that is, code which is invariant to the SMP implementation. Move this code into arch/arm/kernel/smp.c, and add a platform_ prefix to the original function.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

Wait for CPUs to indicate that they've stopped, after sending the stop IPI, rather than blindly continuing on and hoping that they've stopped in time. Print a warning if we fail to stop the other CPUs.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
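The bounded wait can be sketched as follows; smp_cross_call() and IPI_CPU_STOP stand in for the existing ARM IPI plumbing and are used here as assumptions rather than a literal copy of the patch:

    #include <linux/cpumask.h>
    #include <linux/delay.h>
    #include <linux/kernel.h>
    #include <linux/smp.h>
    #include <linux/time.h>

    void smp_send_stop(void)
    {
        unsigned long timeout;
        struct cpumask mask;

        cpumask_copy(&mask, cpu_online_mask);
        cpumask_clear_cpu(smp_processor_id(), &mask);
        smp_cross_call(&mask, IPI_CPU_STOP);    /* assumed IPI broadcast hook */

        /* Wait up to one second for the other CPUs to stop. */
        timeout = USEC_PER_SEC;
        while (num_online_cpus() > 1 && timeout--)
            udelay(1);

        if (num_online_cpus() > 1)
            pr_warn("SMP: failed to stop secondary CPUs\n");
    }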
-
By Russell King

Use r0,r3-r6 rather than r0,r3,r4,r6,r7, which makes it easier to understand which registers can be modified. Also document which registers hold values which must be preserved.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

The IPI and local timer interrupts weren't being properly accounted for in /proc/stat. Collect them from the irq_stat structure, and return their sum.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

This separates out the individual IPI interrupt counts from the total IPI count, which allows better visibility of what IPIs are being used for.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Haojian Zhuang

iwmmxt is used in the XScale, XScale3, Mohawk and PJ4 cores, but the instructions for accessing CP0 and CP1 are changed in PJ4. Append more files to support iwmmxt in the PJ4 core.

Signed-off-by: Zhou Zhu <zzhu3@marvell.com>
Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
-
By Russell King

As per x86, align the initial column according to how many IRQs we have. Also, provide an English explanation for the 'LOC:' and 'IPI:' lines.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

Move the ipi_count into irq_stat, which allows the ipi_data structure to be entirely removed.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

Provide __inc_irq_stat() and __get_irq_stat() to increment and read the irq_stat counters.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
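A minimal sketch of the two accessors, assuming the usual __IRQ_STAT() helper that indexes the per-CPU irq_stat[] array:

    /* Increment/read a named counter in irq_stat for a given CPU. */
    #define __inc_irq_stat(cpu, member)	__IRQ_STAT(cpu, member)++
    #define __get_irq_stat(cpu, member)	__IRQ_STAT(cpu, member)

    /* Example use (the field name is illustrative):
     *	__inc_irq_stat(smp_processor_id(), local_timer_irqs);
     */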
-