- 11 August 2012, 2 commits
-
-
Committed by Colin Cross
Many clocks that are used to provide sched_clock will reset during suspend. If read_sched_clock returns 0 after suspend, sched_clock will appear to jump forward. This patch resets cd.epoch_cyc to the current value of read_sched_clock during resume, which causes sched_clock() just after suspend to return the same value as sched_clock() just before suspend.

In addition, during the window where epoch_ns has been updated before suspend, but epoch_cyc has not been updated after suspend, it is unknown whether the clock has reset or not, and sched_clock() could return a bogus value. Add a suspended flag, and return the pre-suspend epoch_ns value during this period.

The new behavior is triggered by calling setup_sched_clock_needs_suspend instead of setup_sched_clock.

Signed-off-by: Colin Cross <ccross@android.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
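A minimal standalone C sketch of the epoch bookkeeping described above (the struct layout, the stubbed counter, and the 1 MHz scaling are illustrative assumptions; the real code lives in arch/arm/kernel/sched_clock.c and also handles counter wrap and mult/shift scaling):

```c
#include <stdint.h>
#include <stdbool.h>

/* Stand-in for the raw hardware counter; in the kernel this is the
 * platform's read_sched_clock() callback and may reset across suspend. */
static uint32_t read_sched_clock(void)
{
	return 0;                       /* stub for illustration */
}

/* Stand-in for the kernel's mult/shift scaling of counter ticks to ns. */
static uint64_t cyc_to_ns(uint32_t cyc)
{
	return (uint64_t)cyc * 1000;    /* pretend a 1 MHz counter */
}

/* Illustrative clock-data state, mirroring the cd struct named in the commit. */
static struct {
	uint64_t epoch_ns;              /* ns accumulated up to epoch_cyc */
	uint32_t epoch_cyc;             /* counter value at the epoch */
	bool     suspended;             /* set between suspend and resume */
} cd;

static uint64_t sched_clock(void)
{
	/* Between suspend and resume the counter state is unknown, so
	 * report the frozen pre-suspend value instead of garbage. */
	if (cd.suspended)
		return cd.epoch_ns;

	return cd.epoch_ns + cyc_to_ns(read_sched_clock() - cd.epoch_cyc);
}

static void sched_clock_suspend(void)
{
	cd.epoch_ns  = sched_clock();       /* fold the current delta into the epoch */
	cd.suspended = true;
}

static void sched_clock_resume(void)
{
	cd.epoch_cyc = read_sched_clock();  /* the counter may have reset to 0 */
	cd.suspended = false;
}
```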
-
Get rid of this warning:

    arch/arm/kernel/built-in.o(.text+0xac78): Section mismatch in reference from the function init_cpu_topology() to the function .init.text:parse_dt_topology()
    The function init_cpu_topology() references the function __init parse_dt_topology().
    This is often because init_cpu_topology lacks a __init annotation or the annotation of parse_dt_topology is wrong.

Signed-off-by: Venkatraman S <svenkatr@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
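The usual fix for this class of warning is to give the caller the same section annotation as its __init callee; a hedged sketch of the shape of the change in arch/arm/kernel/topology.c (assuming, as the patch does, that the caller is only needed during early boot):

```c
/* Before: init_cpu_topology() lived in .text while calling an __init
 * function.  Adding __init moves it into .init.text as well, so the
 * reference no longer crosses from regular text into init text. */
void __init init_cpu_topology(void)
{
	/* ... reset per-CPU topology state ... */
	parse_dt_topology();    /* __init callee: sections now match */
}
```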
-
- 31 July 2012, 4 commits
-
-
Committed by Russell King
While trying to get a v3.5 kernel booted on the cubox, I noticed that VFP does not work correctly with VFP bounce handling. This is because of the confusion over 16-bit vs 32-bit instructions, and where PC is supposed to point to.

The rule is that FP handlers are entered with regs->ARM_pc pointing at the _next_ instruction to be executed. However, if the exception is not handled, regs->ARM_pc points at the faulting instruction.

This is easy for ARM mode, because we know that the next instruction and previous instructions are separated by four bytes. This is not true of Thumb2 though. Since all FP instructions are 32-bit in Thumb2, it makes things easy. We just need to select the appropriate adjustment. Do this by moving the adjustment out of do_undefinstr() into the assembly code, as only the assembly code knows whether it's dealing with a 32-bit or 16-bit instruction.

Cc: <stable@vger.kernel.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
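In C terms, the correction being selected looks roughly like the sketch below; thumb_mode() and is_wide_instruction() are existing ARM kernel helpers, while adjust_faulting_pc() is a hypothetical name used only for illustration (the actual commit applies the equivalent fixup in the assembly entry path, where the instruction width is already known):

```c
#include <asm/ptrace.h>
#include <asm/traps.h>

/* Hypothetical helper: step ARM_pc back from "next instruction" to the
 * faulting instruction when the FP exception is not handled. */
static void adjust_faulting_pc(struct pt_regs *regs, unsigned int instr)
{
	unsigned int correction;

	if (thumb_mode(regs))
		correction = is_wide_instruction(instr) ? 4 : 2;  /* Thumb2: 32- or 16-bit */
	else
		correction = 4;                                   /* ARM: always 32-bit */

	regs->ARM_pc -= correction;
}
```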
-
Committed by Javier Martinez Canillas
On reboot or poweroff (machine_shutdown()) a call to smp_send_stop() is made (to stop the other CPUs) when CONFIG_SMP=y.

arch/arm/kernel/process.c:

    void machine_shutdown(void)
    {
    #ifdef CONFIG_SMP
            smp_send_stop();
    #endif
    }

smp_send_stop() calls the function pointer smp_cross_call(), which is set in the smp_init_cpus() function for OMAP processors.

arch/arm/mach-omap2/omap-smp.c:

    void __init smp_init_cpus(void)
    {
            ...
            set_smp_cross_call(gic_raise_softirq);
            ...
    }

But the ARM setup_arch() function only calls smp_init_cpus() if CONFIG_SMP=y && is_smp().

arm/kernel/setup.c:

    void __init setup_arch(char **cmdline_p)
    {
            ...
    #ifdef CONFIG_SMP
            if (is_smp())
                    smp_init_cpus();
    #endif
            ...
    }

Newer OMAP CPUs are SMP machines, so omap2plus_defconfig sets CONFIG_SMP=y. Unfortunately, on an OMAP UP machine is_smp() returns false, smp_init_cpus() is never called, and the smp_cross_call() function pointer remains NULL. If the machine is rebooted or powered off, smp_send_stop() will be called (since CONFIG_SMP=y), leading to the following error:

    [ 42.815551] Restarting system.
    [ 42.819030] Unable to handle kernel NULL pointer dereference at virtual address 00000000
    [ 42.827667] pgd = d7a74000
    [ 42.830566] [00000000] *pgd=96ce7831, *pte=00000000, *ppte=00000000
    [ 42.837249] Internal error: Oops: 80000007 [#1] SMP ARM
    [ 42.842773] Modules linked in:
    [ 42.846008] CPU: 0 Not tainted (3.5.0-rc3-next-20120622-00002-g62e87ba-dirty #44)
    [ 42.854278] PC is at 0x0
    [ 42.856994] LR is at smp_send_stop+0x4c/0xe4
    [ 42.861511] pc : [<00000000>] lr : [<c00183a4>] psr: 60000013
    [ 42.861511] sp : d6c85e70 ip : 00000000 fp : 00000000
    [ 42.873626] r10: 00000000 r9 : d6c84000 r8 : 00000002
    [ 42.879150] r7 : c07235a0 r6 : c06dd2d0 r5 : 000f4241 r4 : d6c85e74
    [ 42.886047] r3 : 00000000 r2 : 00000000 r1 : 00000006 r0 : d6c85e74
    [ 42.892944] Flags: nZCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment user
    [ 42.900482] Control: 10c5387d Table: 97a74019 DAC: 00000015
    [ 42.906555] Process reboot (pid: 1166, stack limit = 0xd6c842f8)
    [ 42.912902] Stack: (0xd6c85e70 to 0xd6c86000)
    [ 42.917510] 5e60: c07235a0 00000000 00000000 d6c84000
    [ 42.926177] 5e80: 01234567 c00143d0 4321fedc c00511bc d6c85ebc 00000168 00000460 00000000
    [ 42.934814] 5ea0: c1017950 a0000013 c1017900 d8014390 d7ec3858 c0498e48 c1017950 00000000
    [ 42.943481] 5ec0: d6ddde10 d6c85f78 00000003 00000000 d6ddde10 d6c84000 00000000 00000000
    [ 42.952117] 5ee0: 00000002 00000000 00000000 c0088c88 00000002 00000000 00000000 c00f4b90
    [ 42.960784] 5f00: 00000000 d6c85ebc d8014390 d7e311c8 60000013 00000103 00000002 d6c84000
    [ 42.969421] 5f20: c00f3274 d6e00a00 00000001 60000013 d6c84000 00000000 00000000 c00895d4
    [ 42.978057] 5f40: 00000002 d8007c80 d781f000 c00f6150 d8010cc0 c00f3274 d781f000 d6c84000
    [ 42.986694] 5f60: c0013020 d6e00a00 00000001 20000010 0001257c ef000000 00000000 c00895d4
    [ 42.995361] 5f80: 00000002 00000001 00000003 00000000 00000001 00000003 00000000 00000058
    [ 43.003997] 5fa0: c00130c8 c0012f00 00000001 00000003 fee1dead 28121969 01234567 00000002
    [ 43.012634] 5fc0: 00000001 00000003 00000000 00000058 00012584 0001257c 00000001 00000000
    [ 43.021270] 5fe0: 000124bc bec5cc6c 00008f9c 4a2f7c40 20000010 fee1dead 00000000 00000000
    [ 43.029968] [<c00183a4>] (smp_send_stop+0x4c/0xe4) from [<c00143d0>] (machine_restart+0xc/0x4c)
    [ 43.039154] [<c00143d0>] (machine_restart+0xc/0x4c) from [<c00511bc>] (sys_reboot+0x144/0x1f0)
    [ 43.048278] [<c00511bc>] (sys_reboot+0x144/0x1f0) from [<c0012f00>] (ret_fast_syscall+0x0/0x3c)
    [ 43.057464] Code: bad PC value
    [ 43.060760] ---[ end trace c3988d1dd0b8f0fb ]---

Add a check so smp_cross_call() is only called when there is more than one CPU on-line.

Cc: <stable@vger.kernel.org>
Signed-off-by: Javier Martinez Canillas <javier at dowhile0.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
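A hedged sketch of the guard being added (the exact placement in arch/arm/kernel/smp.c may differ; the point is simply not to call the NULL smp_cross_call pointer when there is no secondary CPU to stop):

```c
void smp_send_stop(void)
{
	unsigned long timeout;
	struct cpumask mask;

	cpumask_copy(&mask, cpu_online_mask);
	cpumask_clear_cpu(smp_processor_id(), &mask);

	/* On a UP machine smp_init_cpus() never ran, so the smp_cross_call
	 * pointer is still NULL: only raise the IPI when another CPU is
	 * actually online. */
	if (num_online_cpus() > 1)
		smp_cross_call(&mask, IPI_CPU_STOP);

	/* Wait up to one second for the other CPUs to stop, as before. */
	timeout = USEC_PER_SEC;
	while (num_online_cpus() > 1 && timeout--)
		udelay(1);
}
```
-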
Committed by Colin Cross
Commit 722b3c74 modified x86 ftrace to avoid tracing all functions called from irqs when function graph was used with a filter. Port the same fix to ARM.

Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Colin Cross <ccross@android.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Shawn Guo
The CPU will endlessly spin at the end of machine_halt and machine_restart calls. However, this will lead to a soft lockup warning after about 20 seconds, if CONFIG_LOCKUP_DETECTOR is enabled, as the system timer is still alive. Disable interrupts before entering the endless spin, so that the lockup warning will never be seen.

Cc: <stable@vger.kernel.org>
Reported-by: Marek Vasut <marex@denx.de>
Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
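The shape of the fix is tiny: mask interrupts before the final spin so the timer tick can no longer feed the soft-lockup detector. A hedged sketch of a fragment of arch/arm/kernel/process.c (the real machine_halt()/machine_restart() also stop the other CPUs first):

```c
void machine_halt(void)
{
	machine_shutdown();

	local_irq_disable();    /* no more timer interrupts on this CPU */
	while (1)
		cpu_relax();    /* spin quietly; the lockup detector stays silent */
}
```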
-
- 30 July 2012, 1 commit
-
-
Committed by Peter Maydell
The memory regions which are passed to arm_add_memory() from device tree blobs via early_init_dt_add_memory_arch() can have sizes which are larger than will fit in a 32 bit integer, so switch to using a phys_addr_t to hold them, to avoid silently dropping the top 32 bits of the size. Similarly, use phys_addr_t in early_mem() so that mem=size@start command line options specifying more than 4GB behave sensibly.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
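A hedged, simplified sketch of the resulting shape in arch/arm/kernel/setup.c (details differ across kernel versions; the point is that phys_addr_t can be 64-bit with LPAE, so a size above 4GB is no longer truncated):

```c
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <asm/memory.h>

/* Before: 'unsigned long size' silently truncated sizes above 4GB. */
int __init arm_add_memory(phys_addr_t start, phys_addr_t size);

/* mem=size@start parsing, simplified: memparse() already returns a
 * 64-bit value, so keeping the result in phys_addr_t preserves the top bits. */
static int __init early_mem(char *p)
{
	phys_addr_t size, start;

	start = PHYS_OFFSET;
	size  = memparse(p, &p);
	if (*p == '@')
		start = memparse(p + 1, &p);

	return arm_add_memory(start, size);
}
early_param("mem", early_mem);
```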
-
- 28 July 2012, 6 commits
-
-
Committed by Will Deacon
Prior to syscall invocation, __sys_trace only reloads r0-r3 from the kernel stack, preventing the debugger from updating arguments 5-7 when signalled via ptrace. This patch updates the code to reload r0-r6, updating arguments 5 and 6 on the stack (argument 7 is only used by OABI indirect syscalls and can remain in a register).

Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Al Viro
just let do_work_pending() return 1 on normal local restarts and -1 on those that had been caused by ERESTART_RESTARTBLOCK (and 0 is still "all done, sod off to userland now"). And let the asm glue flip scno to restart_syscall(2) one if it got negative from us...

[will: resolved conflicts with audit fixes]

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Will Deacon
This reverts commit 3b0c0622. We no longer require the restart trampoline for syscall restarting.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Will Deacon
This reverts commit 433e2f30.

Conflicts:
    arch/arm/kernel/ptrace.c

Reintroduce the new syscall restart handling in preparation for further patches from Al Viro.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 13 July 2012, 3 commits
-
-
Committed by Vincent Guittot
Use the cpu compatibility field and clock-frequency field of the DT to estimate the capacity of each core of the system and to update the cpu_power field accordingly. This patch makes it possible to put more running tasks on big cores than on LITTLE ones. But it doesn't ensure that long-running tasks will run on big cores and short ones on LITTLE cores.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
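A hedged sketch of the idea (the efficiency values, the helper name and the scaling are illustrative assumptions; the real table and normalisation live in arch/arm/kernel/topology.c):

```c
#include <linux/of.h>

struct cpu_efficiency {
	const char *compatible;
	unsigned long efficiency;   /* relative per-MHz efficiency of the core */
};

/* Illustrative values: a big core gets a larger efficiency than a LITTLE one. */
static const struct cpu_efficiency table_efficiency[] = {
	{ "arm,cortex-a15", 3891 },
	{ "arm,cortex-a7",  2048 },
	{ NULL, },
};

/* Raw capacity ~ efficiency * clock-frequency; the caller then scales the
 * result so that a mid-range value maps onto the default cpu_power (1024). */
static unsigned long cpu_capacity_from_dt(struct device_node *cn)
{
	const struct cpu_efficiency *e;
	const __be32 *rate;

	for (e = table_efficiency; e->compatible; e++)
		if (of_device_is_compatible(cn, e->compatible))
			break;

	rate = of_get_property(cn, "clock-frequency", NULL);
	if (!e->compatible || !rate)
		return 0;   /* unknown core or missing property: keep the default */

	return e->efficiency * (be32_to_cpup(rate) >> 20);   /* >>20: roughly MHz units */
}
```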
-
Committed by Vincent Guittot
This factorization has also been proposed in another patch that has not been merged yet:

http://lists.infradead.org/pipermail/linux-arm-kernel/2012-January/080873.html

So, this patch could be dropped depending on the state of the other one.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Vincent Guittot
Add infrastructure to be able to modify the cpu_power of each core.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 10 July 2012, 11 commits
-
-
Committed by Will Deacon
The syscall_trace on ARM takes a `why' parameter to indicate whether or not we are entering or exiting a system call. This can be confusing for people looking at the code since (a) it conflicts with the why register alias in the entry assembly code and (b) it is not immediately clear what it represents.

This patch splits up the syscall_trace function into separate wrappers for syscall entry and exit, allowing the low-level syscall handling code to branch to the appropriate function.

Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
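Roughly, the resulting split looks like this (hedged sketch; the real arch/arm/kernel/ptrace.c versions also handle audit, and ptrace_report_syscall() here is a hypothetical placeholder for the tracer notification):

```c
asmlinkage int syscall_trace_enter(struct pt_regs *regs, int scno)
{
	if (test_thread_flag(TIF_SYSCALL_TRACE))
		scno = ptrace_report_syscall(regs, scno);  /* hypothetical helper */
	/* audit_syscall_entry(...) would go here */
	return scno;    /* the tracer may have rewritten the syscall number */
}

asmlinkage void syscall_trace_exit(struct pt_regs *regs)
{
	/* audit_syscall_exit(regs) first, then notify the tracer */
	if (test_thread_flag(TIF_SYSCALL_TRACE))
		tracehook_report_syscall_exit(regs, 0);
}
```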
-
Committed by Will Deacon
When auditing system calls on ARM, the audit code is called before notifying the parent process in the case that the current task is being ptraced. At this point, the parent (debugger) may choose to change the system call being issued via the SET_SYSCALL ptrace request, causing the wrong system call to be reported to the audit tools. This patch moves the audit calls after the ptrace SIGTRAP handling code in the syscall tracing implementation.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Will Deacon
ret_from_fork is setup for a freshly spawned child task via copy_thread, called from copy_process. The latter function clears TIF_SYSCALL_TRACE and also resets the child task's audit_context to NULL, meaning that there is little point invoking the system call tracing routines. Furthermore, getting hold of the syscall number is a complete pain and it looks like the current code doesn't even bother. This patch removes the syscall tracing checks from ret_from_fork.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Will Deacon
This patch allows a timer-based delay implementation to be selected by switching the delay routines over to use get_cycles, which is implemented in terms of read_current_timer. This further allows us to skip the loop calibration and have a consistent delay function in the face of core frequency scaling.

To avoid the pain of dealing with memory-mapped counters, this implementation uses the co-processor interface to the architected timers when they are available. The previous loop-based implementation is kept around for CPUs without the architected timers and we retain both the maximum delay (2ms) and the corresponding conversion factors for determining the number of loops required for a given interval.

Since the indirection of the timer routines will only work when called from C, the sa1100 sleep routines are modified to branch to the loop-based delay functions directly.

Tested-by: Shinya Kuribayashi <shinya.kuribayashi.px@renesas.com>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
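Conceptually, the timer-based variant replaces "spin N pre-calibrated loops" with "spin until the cycle counter has advanced far enough"; a hedged sketch (timer_rate_hz is a placeholder for whatever rate the registered timer reports, and the real code uses a fixed-point multiplier rather than a runtime division):

```c
static unsigned long timer_rate_hz;   /* placeholder: set when the timer is registered */

static void __timer_delay(unsigned long cycles)
{
	cycles_t start = get_cycles();

	while ((get_cycles() - start) < cycles)
		cpu_relax();
}

static void __timer_udelay(unsigned long usecs)
{
	/* UDELAY_MULT-style fixed-point conversion in the real code;
	 * here, simply: cycles = usecs * rate / 10^6. */
	__timer_delay((usecs * timer_rate_hz) / 1000000);
}
```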
-
Committed by Will Deacon
This patch implements read_current_timer using the architected timers when they are selected via CONFIG_ARM_ARCH_TIMER. If they are detected not to be usable at runtime, we return -ENXIO to the caller. Furthermore, if read_current_timer is exported then we can implement get_cycles in terms of it for use as both an entropy source and for implementing __udelay and friends.

Tested-by: Shinya Kuribayashi <shinya.kuribayashi.px@renesas.com>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Will Deacon
This patch implements the word-at-a-time interface for ARM using the same algorithm as x86. We use the fls macro from ARMv5 onwards, where we have a clz instruction available which saves us a mov instruction when targeting Thumb-2. For older CPUs, we use the magic 0x0ff0001 constant. Big-endian configurations make use of the implementation from asm-generic.

With this implemented, we can replace our byte-at-a-time strnlen_user and strncpy_from_user functions with the optimised generic versions.

Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
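The underlying bit trick can be shown as standalone C. This is the generic little-endian 0x01010101/0x80808080 form used by the asm-generic word-at-a-time helpers; ARM's optimised variant differs, as the commit notes:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Nonzero iff some byte of x is zero: the classic
 * (x - 0x01..01) & ~x & 0x80..80 trick, one word at a time. */
static uint32_t has_zero32(uint32_t x)
{
	return (x - 0x01010101u) & ~x & 0x80808080u;
}

int main(void)
{
	uint32_t w1, w2;

	memcpy(&w1, "abcd", 4);   /* no NUL byte      */
	memcpy(&w2, "ab\0d", 4);  /* NUL in byte 2    */

	/* prints: 00000000 00800000 (high bit of the zero byte set) */
	printf("%08x %08x\n", has_zero32(w1), has_zero32(w2));
	return 0;
}
```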
-
Committed by Will Deacon
In order to provide PMU name strings compatible with the OProfile user ABI, an enumeration of all PMUs is currently used by perf to identify each PMU uniquely. Unfortunately, this does not scale well in the presence of multiple PMUs and creates a single, global namespace across all PMUs in the system.

This patch removes the enumeration and instead uses the name string for the PMU to map onto the OProfile variant. perf_pmu_name is implemented for CPU PMUs, which is all that OProfile cares about anyway.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Lorenzo Pieralisi
When a CPU is shut down, its architected timer comparator registers are lost. Within CPU idle, before processors enter shutdown they enter clock events broadcast mode through the

    clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ENTER, cpuid);

function, where the local timers are emulated by a global always-on timer. On CPU resume, the per-CPU tick device normal mode is restored by exiting broadcast mode through

    clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_EXIT, cpuid);

In order for this mechanism to function, architected timers should add C3STOP to their features, which means that they are not able to function when the CPU is in off-mode.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
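The flag itself is just part of each CPU's clock_event_device setup; a hedged sketch of what an architected-timer driver might register (name and rating are illustrative):

```c
static void arch_timer_setup(struct clock_event_device *clk)
{
	clk->name     = "arch_sys_timer";
	clk->rating   = 450;
	clk->features = CLOCK_EVT_FEAT_ONESHOT |
			CLOCK_EVT_FEAT_C3STOP;   /* stops in deep idle: the
						  * broadcast timer takes over */
	/* ->set_mode / ->set_next_event callbacks and
	 * clockevents_config_and_register() are omitted in this sketch */
}
```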
-
Committed by Nicolas Pitre
Let's map the initial RAM up to the end of the kernel .bss instead of the strict kernel image area. This simplifies the code as the kernel image only needs to be handled specially in the XIP case. That covers the legacy ATAG location as well.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Rabin Vincent
Robustify ARM's die() handling with improvements from x86:

- Fix for a deadlock (before panic in the case of panic_on_oops) if we oops under a spinlock which is also used from interrupt handler, since the old code was unconditionally enabling interrupts.
- Usage of arch spinlock so lockdep etc doesn't get involved while we're trying to dump out oopses.
- Deadlock prevention in the unlikely event that die() recurses.

The changes all touch the same few lines of code, so they're done together in one patch.

Signed-off-by: Rabin Vincent <rabin.vincent@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Stephen Boyd
While running hotplug tests I ran into this RCU splat:

    ===============================
    [ INFO: suspicious RCU usage. ]
    3.4.0 #3275 Tainted: G W
    -------------------------------
    include/linux/rcupdate.h:729 rcu_read_lock() used illegally while idle!

    other info that might help us debug this:

    RCU used illegally from idle CPU! rcu_scheduler_active = 1, debug_locks = 0
    RCU used illegally from extended quiescent state!
    4 locks held by swapper/2/0:
     #0: ((cpu_died).wait.lock){......}, at: [<c00ab128>] complete+0x1c/0x5c
     #1: (&p->pi_lock){-.-.-.}, at: [<c00b275c>] try_to_wake_up+0x2c/0x388
     #2: (&rq->lock){-.-.-.}, at: [<c00b2860>] try_to_wake_up+0x130/0x388
     #3: (rcu_read_lock){.+.+..}, at: [<c00abe5c>] cpuacct_charge+0x28/0x1f4

    stack backtrace:
    [<c001521c>] (unwind_backtrace+0x0/0x12c) from [<c00abec8>] (cpuacct_charge+0x94/0x1f4)
    [<c00abec8>] (cpuacct_charge+0x94/0x1f4) from [<c00b395c>] (update_curr+0x24c/0x2c8)
    [<c00b395c>] (update_curr+0x24c/0x2c8) from [<c00b59c4>] (enqueue_task_fair+0x50/0x194)
    [<c00b59c4>] (enqueue_task_fair+0x50/0x194) from [<c00afea4>] (enqueue_task+0x30/0x34)
    [<c00afea4>] (enqueue_task+0x30/0x34) from [<c00b0908>] (ttwu_activate+0x14/0x38)
    [<c00b0908>] (ttwu_activate+0x14/0x38) from [<c00b28a8>] (try_to_wake_up+0x178/0x388)
    [<c00b28a8>] (try_to_wake_up+0x178/0x388) from [<c00a82a0>] (__wake_up_common+0x34/0x78)
    [<c00a82a0>] (__wake_up_common+0x34/0x78) from [<c00ab154>] (complete+0x48/0x5c)
    [<c00ab154>] (complete+0x48/0x5c) from [<c07db7cc>] (cpu_die+0x2c/0x58)
    [<c07db7cc>] (cpu_die+0x2c/0x58) from [<c000f954>] (cpu_idle+0x64/0xfc)
    [<c000f954>] (cpu_idle+0x64/0xfc) from [<80208160>] (0x80208160)

When a cpu is marked offline during its idle thread it calls cpu_die() during an RCU idle period. cpu_die() calls complete() to notify the killing process that the cpu has died. complete() calls into the scheduler code and eventually grabs an RCU read lock in cpuacct_charge().

Mark complete() as RCU_NONIDLE so that RCU pays attention to this CPU for the duration of the complete() function even though it's in idle.

Suggested-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
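The fix itself is essentially a one-liner: wrap the wakeup in RCU_NONIDLE() so RCU treats the CPU as non-idle for the duration of the call. A hedged sketch of cpu_die() in arch/arm/kernel/smp.c:

```c
void __ref cpu_die(void)
{
	unsigned int cpu = smp_processor_id();

	idle_task_exit();
	local_irq_disable();
	mb();

	/* complete() reaches cpuacct_charge(), which takes rcu_read_lock(),
	 * so tell RCU this idle CPU is momentarily non-idle. */
	RCU_NONIDLE(complete(&cpu_died));

	/* hand over to the platform's low-power entry; does not return */
	platform_cpu_die(cpu);
}
```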
-
- 05 July 2012, 4 commits
-
-
Committed by Rabin Vincent
'sub pc, pc, #1b-2b+8-2' results in address<1:0> == '10'.

sub pc, pc, #const (== ADR pc, #const) performs an interworking branch (BXWritePC()) on ARMv7+ and a simple branch (BranchWritePC()) on earlier versions.

In ARM state, BXWritePC() is UNPREDICTABLE when address<1:0> == '10'. In ARM state on ARMv6+, BranchWritePC() ignores address<1:0>. Before ARMv6, BranchWritePC() is UNPREDICTABLE if address<1:0> != '00'.

So the instruction is UNPREDICTABLE both before and after v6.

Acked-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Rabin Vincent <rabin.vincent@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Will Deacon
We currently return -EPERM if the user requests mode exclusion that is not supported by the CPU. This looks pretty confusing from userspace and is inconsistent with other architectures (ppc, x86). This patch returns -EOPNOTSUPP instead.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Will Deacon
This reverts commit 6b5c8045.

Conflicts:
    arch/arm/kernel/ptrace.c

The new syscall restarting code can lead to problems if we take an interrupt in userspace just before restarting the svc instruction. If a signal is delivered when returning from the interrupt, the TIF_SYSCALL_RESTARTSYS will remain set and cause any syscalls executed from the signal handler to be treated as a restart of the previously interrupted system call. This includes the final sigreturn call, meaning that we may fail to exit from the signal context. Furthermore, if a system call made from the signal handler requires a restart via the restart_block, it is possible to clear the thread flag and fail to restart the originally interrupted system call.

The right solution to this problem is to perform the restarting in the kernel, avoiding the possibility of handling a further signal before the restart is complete. Since we're almost at -rc6, let's revert the new method for now and aim for in-kernel restarting at a later date.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Will Deacon
This reverts commit fa18484d. We need the restart trampoline back so that we can revert a related problematic patch 6b5c8045 ("arm: new way of handling ERESTART_RESTARTBLOCK").

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 02 July 2012, 1 commit
-
-
Committed by Ludovic Desroches
The Advanced Interrupt Controller allows us to use the fast EOI handler type. It lets us remove the Atmel-specific workaround in arch/arm/kernel/irq.c that was used to indicate the end of interrupt handling to the AIC.

Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
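With a fasteoi flow, the end-of-interrupt is issued by the irq chip's ->irq_eoi() callback instead of by hand after every handler; a hedged sketch of how an AIC interrupt might be wired up (the callback names here are hypothetical):

```c
static struct irq_chip aic_chip = {
	.name       = "AIC",
	.irq_mask   = aic_mask_irq,    /* hypothetical callbacks */
	.irq_unmask = aic_unmask_irq,
	.irq_eoi    = aic_eoi,         /* signals EOI to the controller once done */
};

static void aic_map_irq(unsigned int irq)
{
	/* handle_fasteoi_irq calls ->irq_eoi() itself, so the old
	 * "ack at the end of every handler" workaround in
	 * arch/arm/kernel/irq.c is no longer needed. */
	irq_set_chip_and_handler(irq, &aic_chip, handle_fasteoi_irq);
}
```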
-
- 01 July 2012, 1 commit
-
-
Committed by Shawn Guo
The commit a2be01b1 (ARM: only include mach/irqs.h for !SPARSE_IRQ) makes mach/irqs.h only be included for !SPARSE_IRQ builds. A number of platforms have FIQ_START defined in mach/irqs.h for FIQ support:

    arch/arm/mach-rpc/include/mach/irqs.h:#define FIQ_START 64
    arch/arm/mach-s3c24xx/include/mach/irqs.h:#define FIQ_START IRQ_EINT0
    arch/arm/plat-mxc/include/mach/irqs.h:#define FIQ_START 0

If SPARSE_IRQ is enabled for any of these platforms, the following compile error will be seen:

    arch/arm/kernel/fiq.c: In function ‘enable_fiq’:
    arch/arm/kernel/fiq.c:127:19: error: ‘FIQ_START’ undeclared (first use in this function)
    arch/arm/kernel/fiq.c:127:19: note: each undeclared identifier is reported only once for each function it appears in
    arch/arm/kernel/fiq.c: In function ‘disable_fiq’:
    arch/arm/kernel/fiq.c:132:20: error: ‘FIQ_START’ undeclared (first use in this function)

The patch changes the fiq code to have init_FIQ take FIQ_START from platforms as a parameter and assign it to the variable fiq_start, which replaces the FIQ_START uses in enable_fiq()/disable_fiq().

Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
Cc: Kukjin Kim <kgene.kim@samsung.com>
Cc: Sascha Hauer <s.hauer@pengutronix.de>
Cc: Rob Herring <rob.herring@calxeda.com>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
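A hedged sketch of the resulting shape: the platform passes its FIQ base at init time instead of the core code relying on a compile-time mach/irqs.h macro.

```c
/* arch/arm/kernel/fiq.c (sketch) */
static unsigned long fiq_start;      /* replaces the compile-time FIQ_START */

void enable_fiq(int fiq)
{
	enable_irq(fiq + fiq_start);
}

void disable_fiq(int fiq)
{
	disable_irq(fiq + fiq_start);
}

void __init init_FIQ(int start)
{
	fiq_start = start;
	/* ... existing default FIQ handler setup ... */
}

/* platform code, e.g. in a machine's init_irq hook: */
init_FIQ(FIQ_START);
```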
-
- 23 June 2012, 1 commit
-
-
Committed by David Brown
ARM builds seem to be plagued by an occasional build error:

    Inconsistent kallsyms data
    This is a bug - please report about it
    Try "make KALLSYMS_EXTRA_PASS=1" as a workaround

The problem has to do with alignment of some sections by the linker. The kallsyms data is built in two passes by first linking the kernel without it, and then linking the kernel again with the symbols included. Normally, this just shifts the symbols, without changing their order, and the compression used by the kallsyms gives the same result.

On non-SMP, the per-CPU data is empty. Depending on where the alignment ends up, it can come out as either:

    +-------------------+
    | last text segment |
    +-------------------+
    /* padding */
    +-------------------+ <- L1_CACHE_BYTES alignment
    | per cpu (empty)   |
    +-------------------+
    __per_cpu_end:
    /* padding */
    __data_loc:
    +-------------------+ <- THREAD_SIZE alignment
    | data              |
    +-------------------+

or

    +-------------------+
    | last text segment |
    +-------------------+
    /* padding */
    +-------------------+ <- L1_CACHE_BYTES alignment
    | per cpu (empty)   |
    +-------------------+
    __per_cpu_end:
    /* no padding */
    __data_loc:
    +-------------------+ <- THREAD_SIZE alignment
    | data              |
    +-------------------+

if the alignment satisfies both. Because symbols that have the same address are sorted by 'nm -n', the second case will be in a different order than the first case. This changes the compression, changing the size of the kallsyms data, causing the build failure.

The KALLSYMS_EXTRA_PASS=1 workaround usually works, but it is still possible to have the alignment change between the second and third pass. It's probably even possible for it to never reach a fixed point.

The problem only occurs on non-SMP, when the per-cpu data is empty, and when the data segment has alignment (and immediately follows the text segments). Fix this by only including the per_cpu section on SMP, when it is not empty.

Signed-off-by: David Brown <davidb@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 16 June 2012, 1 commit
-
-
Committed by Will Deacon
Fixup entries in the kernel exception tables should be 4-byte aligned since we return directly to them when handling a faulting instruction in the kernel. This patch adds the missing align directives to the fixup entries.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 14 June 2012, 2 commits
-
-
Committed by Rabin Vincent
t32_simulate_ldr_literal() can be run without an instruction slot, so it should be using DECODE_SIMULATEX instead of DECODE_EMULATEX.

Acked-by: Jon Medhurst <tixy@yxit.co.uk>
Signed-off-by: Rabin Vincent <rabin.vincent@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Yinghai Lu
Replace the struct pci_bus secondary/subordinate members with the struct resource busn_res. Later we'll build a resource tree of these bus numbers.

[bhelgaas: changelog]
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
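For users of the struct, the change mostly means reading the resource instead of the old byte-sized fields; a hedged sketch of the before/after access pattern:

```c
/* Before */
u8 secondary   = bus->secondary;
u8 subordinate = bus->subordinate;

/* After: the bus number range lives in the busn_res resource */
int sec = bus->busn_res.start;
int sub = bus->busn_res.end;

/* Being a struct resource, the range also prints with %pR: */
dev_info(&bus->dev, "busn_res: %pR\n", &bus->busn_res);
```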
-
- 12 June 2012, 1 commit
-
-
The fixups are executed once the pci-device is found, which is during the boot process, so __init seems fine as long as the platform does not support hotplug. However, it is possible to remove the PCI bus at run time and have it rediscovered again via "echo 1 > /sys/bus/pci/rescan", and this will call the fixups again.

Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
-
- 02 June 2012, 2 commits
-
-
Committed by Al Viro
Does block_sigmask() + tracehook_signal_handler(); called when sigframe has been successfully built. All architectures converted to it; block_sigmask() itself is gone now (merged into this one).

I'm still not too happy with the signature, but that's a separate story (IMO we need a structure that would contain signal number + siginfo + k_sigaction, so that get_signal_to_deliver() would fill one, signal_delivered(), handle_signal() and probably setup...frame() - take one).

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro
Only 3 out of 63 do not. Renamed the current variant to __set_current_blocked(), added set_current_blocked() that will exclude unblockable signals, switched open-coded instances to it.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
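A hedged sketch of the resulting pair in kernel/signal.c (the real __set_task_blocked() helper also retargets shared pending signals; this only shows the split):

```c
void __set_current_blocked(const sigset_t *newset)
{
	struct task_struct *tsk = current;

	spin_lock_irq(&tsk->sighand->siglock);
	__set_task_blocked(tsk, newset);    /* update ->blocked, recalc pending */
	spin_unlock_irq(&tsk->sighand->siglock);
}

void set_current_blocked(sigset_t *newset)
{
	/* The 3-of-63 special cases keep calling __set_current_blocked();
	 * everyone else gets SIGKILL/SIGSTOP filtered out for free. */
	sigdelsetmask(newset, sigmask(SIGKILL) | sigmask(SIGSTOP));
	__set_current_blocked(newset);
}
```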
-