- 15 July 2013, 1 commit
-
-
By Paul Gortmaker

The __cpuinit type of throwaway sections might have made sense some time ago when RAM was more constrained, but now the savings do not offset the cost and complications. For example, the fix in commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time") is a good example of the nasty type of bugs that can be created with improper use of the various __init prefixes.

After a discussion on LKML[1] it was decided that __cpuinit should go the way of __devinit and be phased out. Once all the users are gone, we can then finally remove the macros themselves from linux/init.h.

Note that some harmless section mismatch warnings may result, since notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c) and are flagged as __cpuinit -- so if we remove the __cpuinit from the arch specific callers, we will also get section mismatch warnings. As an intermediate step, we intend to turn the linux/init.h cpuinit related content into no-ops as early as possible, since that will get rid of these warnings. In any case, they are temporary and harmless.

This removes all the ARM uses of the __cpuinit macros from C code, and all __CPUINIT from assembly code. Two ".previous" section statements that were paired off against __CPUINIT (aka .section ".cpuinit.text") also get removed here.

[1] https://lkml.org/lkml/2013/5/20/589

Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
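To make the mechanical nature of the change concrete, a hedged before/after sketch (the function shown is just an illustrative CPU-bringup helper, not a specific hunk from this patch):

    /* Before: the annotation places the code in the discardable
     * .cpuinit.text section. */
    static void __cpuinit percpu_timer_setup(void);

    /* After: the annotation is simply dropped and the code stays
     * in regular .text. */
    static void percpu_timer_setup(void);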
-
- 10 July 2013, 5 commits
-
-
By Jason Liu

When the local timer frequency changes, the twd_update_frequency() function should be run on all CPUs, including the one issuing the call; otherwise the twd frequency will not get updated and the local timer will not run correctly. smp_call_function() runs a function on all other CPUs but not on the calling CPU itself, which is not what we want here; use on_each_cpu() instead to fix this issue.

Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: Rob Herring <rob.herring@calxeda.com>
Cc: Shawn Guo <shawn.guo@linaro.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: stable@vger.kernel.org
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Jason Liu <r64343@freescale.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
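The essence of the fix is a one-line call-site change; a sketch, assuming the clock-notifier context in arch/arm/kernel/smp_twd.c:

    -	smp_call_function(twd_update_frequency, NULL, 1);  /* skips the calling CPU */
    +	on_each_cpu(twd_update_frequency, NULL, 1);        /* runs everywhere, including here */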
-
By Robin Holt

Merge together the unicore32, arm, and x86 reboot= command line parameter handling.

Signed-off-by: Robin Holt <holt@sgi.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Russ Anderson <rja@sgi.com>
Cc: Robin Holt <holt@sgi.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Guan Xuetao <gxt@mprc.pku.edu.cn>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Robin Holt

Preparing to move the parsing of reboot= to generic kernel code forces the change in reboot_mode handling to use the enum.

[akpm@linux-foundation.org: fix arch/arm/mach-socfpga/socfpga.c]
Signed-off-by: Robin Holt <holt@sgi.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Russ Anderson <rja@sgi.com>
Cc: Robin Holt <holt@sgi.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Robin Holt

Prepare for moving the parsing of reboot= to the generic kernel code by making reboot_mode into a more generic form.

Signed-off-by: Robin Holt <holt@sgi.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Russ Anderson <rja@sgi.com>
Cc: Robin Holt <holt@sgi.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
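For reference, the generic form this series converges on is an enum roughly like the one below (as eventually defined in include/linux/reboot.h; treat the exact member list as an approximation), with the arch restart hooks taking an enum instead of a raw char:

    enum reboot_mode {
        REBOOT_COLD = 0,
        REBOOT_WARM,
        REBOOT_HARD,
        REBOOT_SOFT,
        REBOOT_GPIO,
    };

    /* A machine restart hook then has this shape: */
    void (*restart)(enum reboot_mode mode, const char *cmd);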
-
By Oleg Nesterov

This reverts commit bf0b8f4b ("hw_breakpoints: Fix racy access to ptrace breakpoints"). The patch was fine, but we can no longer race with SIGKILL after commit 9899d11f ("ptrace: ensure arch_ptrace/ptrace_request can never race with SIGKILL"): the __TASK_TRACED tracee can't be woken up and ->ptrace_bps[] can't go away.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jan Kratochvil <jan.kratochvil@redhat.com>
Cc: Michael Neuling <mikey@neuling.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Prasad <prasad@linux.vnet.ibm.com>
Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 09 July 2013, 1 commit
-
-
By Stephen Warren

Macro __INIT is used to place various code in head-common.S into the init section. This should be matched by a closing __FINIT. Also, add an explicit ".text" to ensure subsequent code is placed into the correct section; __FINIT is simply a closing marker to match __INIT and doesn't guarantee a revert to .text.

This historically caused no problem, because macro __CPUINIT was used at the exact location where __FINIT was missing, which then placed the following code into the cpuinit section. However, with commit 22f0a273 "init.h: remove __cpuinit sections from the kernel" applied, __CPUINIT becomes a no-op, thus leaving all this code in the init section rather than the regular text section. This caused issues such as secondary CPU boot failures or crashes.

Signed-off-by: Stephen Warren <swarren@nvidia.com>
Acked-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 30 June 2013, 1 commit
-
-
By Olof Johansson

Due to recent changes and expectations of proper cpu bindings, there are now cases for many of the in-tree devicetrees where a WARN() will hit on boot due to badly formatted /cpus nodes. Downgrade this to a pr_warn() to be less alarmist, since it's not a new problem. Tested on Arndale, Cubox, Seaboard and Panda ES; Panda hits the WARN without this, the others do not.

Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 24 June 2013, 5 commits
-
-
By Marc Zyngier

Looking into the active_asids array is not enough, as we also need to look into the reserved_asids array (they both represent processes that are currently running). Also, not holding the ASID allocator lock is racy, as another CPU could schedule that process and trigger a rollover, making the erratum workaround miss an IPI. Exposing this outside of context.c is a little ugly on the side, so let's define a new entry point that the erratum workaround can call to obtain the cpumask.

Cc: <stable@vger.kernel.org> # 3.9
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Jed Davis

With this change, we no longer lose the innermost entry in the user-mode part of the call chain. See also the x86 port, which includes the ip. It's possible to partially work around this problem by post-processing the data to use the PERF_SAMPLE_IP value, but this works only if the CPU wasn't in the kernel when the sample was taken.

Cc: <stable@vger.kernel.org>
Signed-off-by: Jed Davis <jld@mozilla.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
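A minimal sketch of the idea, assuming the user-callchain walker in arch/arm/kernel/perf_event.c (loop bounds and helper details elided):

    static void perf_callchain_user(struct perf_callchain_entry *entry,
                                    struct pt_regs *regs)
    {
        struct frame_tail __user *tail;

        /* Record the sampled PC itself first, as the x86 port does,
         * so the innermost user-mode entry is never lost. */
        perf_callchain_store(entry, regs->ARM_pc);

        tail = (struct frame_tail __user *)regs->ARM_fp - 1;
        while (tail && !((unsigned long)tail & 0x3))
            tail = user_backtrace(tail, entry);  /* walk saved fp/lr pairs */
    }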
-
By André Hentschel

Since commit 6a1c5312 the user-writeable TLS register was zeroed to prevent it from being used as a covert channel between two tasks. There are more and more applications coming to Windows RT, and Wine could support them, but mostly they expect to have the thread environment block (TEB) in TPIDRURW. This patch preserves that register per thread instead of clearing it. Unlike TPIDRURO, which is already switched, TPIDRURW can be updated from userspace, so it needs careful treatment in the case that we modify TPIDRURW and call fork(). To avoid this we must always read TPIDRURW in copy_thread.

Signed-off-by: André Hentschel <nerv@dawncrow.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
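A sketch of the CP15 accessors involved (shapes as in arch/arm/include/asm/tls.h; treat the helper names as illustrative):

    /* TPIDRURW: the user read/write thread ID register (CP15 c13, c0, 2). */
    static inline unsigned long get_tpuser(void)
    {
        unsigned long reg;
        asm("mrc p15, 0, %0, c13, c0, 2" : "=r" (reg));
        return reg;
    }

    static inline void set_tpuser(unsigned long reg)
    {
        asm volatile("mcr p15, 0, %0, c13, c0, 2" : : "r" (reg));
    }

Reading the live register in copy_thread() means the child inherits whatever value the parent last wrote from userspace, even if the kernel's cached copy is stale.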
-
By Lorenzo Pieralisi

The __cpu_logical_map array is statically initialized to 0, which is a valid MPIDR value. To prevent issues with the current implementation, this patch defines an MPIDR_INVALID value, and statically initializes the __cpu_logical_map[] array to it. Entries in the arm_dt_init_cpu_maps() tmp_map array, used to stash DT reg properties while parsing DT, are initialized with the MPIDR_INVALID value as well for consistency.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
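Any value with bits set outside the 24-bit MPIDR affinity field works as the invalid marker; a sketch of the definition, matching the shape used in arch/arm/include/asm/cputype.h:

    #define MPIDR_HWID_BITMASK  0xFFFFFF
    #define MPIDR_INVALID       (~MPIDR_HWID_BITMASK)  /* 0xFF000000 */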
-
By Lorenzo Pieralisi

The introduction of the cpu-map topology node in the cpus node implies that the cpus node might have children that are not cpu nodes. The DT parsing code needs updating, otherwise it would check for cpu node properties in nodes that are not required to contain them, resulting in warnings that have no bearing on bindings defined in the dts source file.

Cc: <stable@vger.kernel.org> [3.8+]
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 20 June 2013, 2 commits
-
-
By Lorenzo Pieralisi

The current implementation of cpu_suspend/cpu_resume relies on the MPIDR to index the array of pointers where the context is saved and restored. This approach works as long as the MPIDR can be considered a linear index, so that the pointers array can simply be dereferenced using the MPIDR[7:0] value. On ARM multi-cluster systems, where the MPIDR may not be a linear index, a mapping function must be applied to it so that it can be used to properly dereference the stack pointer array.

This patch adds code to the cpu_suspend/cpu_resume implementation that relies on a shifting-and-ORing hashing method to map an MPIDR value to a set of buckets precomputed at boot, giving a collision-free mapping from MPIDR to context pointers. The hashing algorithm must be simple, fast, and implementable with few instructions, since in the cpu_resume path the mapping is carried out with the MMU off and the I-cache off, hence code and data are fetched from DRAM with no caching available. Simplicity is counterbalanced by a small increase of memory (allocated dynamically) for the stack pointer buckets, which should anyway be fairly limited on most systems.

Memory for context pointers is allocated in an early_initcall, with its size precomputed and stashed previously in kernel data structures. It is allocated through kmalloc; this guarantees contiguous physical addresses for the allocated memory, which is fundamental to the correct functioning of the resume mechanism, as it relies on the context pointer array being a chunk of contiguous physical memory.

Virtual-to-physical address conversion for the context pointer array base is carried out at boot to avoid fiddling with virt_to_phys conversions in the cpu_resume path, which is quite fragile and should be optimized to execute as few instructions as possible. Virtual and physical context pointer base array addresses are stashed in a struct that is accessible from assembly using values generated through the asm-offsets.c mechanism.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Colin Cross <ccross@android.com>
Cc: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Amit Kucheria <amit.kucheria@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Stephen Warren <swarren@wwwdotorg.org>
-
By Lorenzo Pieralisi

On ARM SMP systems, cores are identified by their MPIDR register. The MPIDR guidelines in the ARM ARM do not provide strict enforcement of MPIDR layout, only recommendations that, if followed, split the MPIDR on ARM 32-bit platforms into three affinity levels. In multi-cluster systems like big.LITTLE, if the affinity guidelines are followed, the MPIDR can no longer be considered an index. This means that the association between a logical CPU in the kernel and the HW CPU identifier becomes somewhat more complicated, requiring methods like hashing to associate a given MPIDR with a CPU logical index, in order for the look-up to be carried out in an efficient and scalable way.

This patch provides a function in the kernel that, starting from the cpu_logical_map, implements collision-free hashing of MPIDR values by checking all significant bits of the MPIDR affinity level bitfields. The hashing can then be carried out through bit shifting and ORing; the resulting hash algorithm is a collision-free, though not minimal, hash that can be executed with few assembly instructions. The MPIDR is filtered through an mpidr mask that is built by checking all bits that toggle in the set of MPIDRs corresponding to possible CPUs. Bits that do not toggle carry no information, so they do not contribute to the resulting hash.

Pseudo code:

    /* check all bits that toggle, so they are required */
    for (i = 1, mpidr_mask = 0; i < num_possible_cpus(); i++)
        mpidr_mask |= (cpu_logical_map(i) ^ cpu_logical_map(0));

    /*
     * Build shifts to be applied to aff0, aff1, aff2 values to hash the mpidr
     * fls() returns the last bit set in a word, 0 if none
     * ffs() returns the first bit set in a word, 0 if none
     */
    fs0 = mpidr_mask[7:0] ? ffs(mpidr_mask[7:0]) - 1 : 0;
    fs1 = mpidr_mask[15:8] ? ffs(mpidr_mask[15:8]) - 1 : 0;
    fs2 = mpidr_mask[23:16] ? ffs(mpidr_mask[23:16]) - 1 : 0;
    ls0 = fls(mpidr_mask[7:0]);
    ls1 = fls(mpidr_mask[15:8]);
    ls2 = fls(mpidr_mask[23:16]);
    bits0 = ls0 - fs0;
    bits1 = ls1 - fs1;
    bits2 = ls2 - fs2;
    aff0_shift = fs0;
    aff1_shift = 8 + fs1 - bits0;
    aff2_shift = 16 + fs2 - (bits0 + bits1);

    u32 hash(u32 mpidr)
    {
        u32 l0, l1, l2;
        u32 mpidr_masked = mpidr & mpidr_mask;
        l0 = mpidr_masked & 0xff;
        l1 = mpidr_masked & 0xff00;
        l2 = mpidr_masked & 0xff0000;
        return (l0 >> aff0_shift | l1 >> aff1_shift | l2 >> aff2_shift);
    }

The hashing algorithm relies on the inherent properties set in the ARM ARM recommendations for the MPIDR. Exotic configurations, where for instance the MPIDR values at a given affinity level have large holes, can end up requiring big hash tables, since the compression of values achievable through shifting is somewhat crippled when holes are present. The kernel warns if the number of buckets of the resulting hash table exceeds the number of possible CPUs by a factor of 4, which is a symptom of a very sparse HW MPIDR configuration.

The hash algorithm is quite simple and can easily be implemented in assembly code, to be used in code paths where the kernel virtual address space is not set up (i.e. cpu_resume) and instruction and data fetches are strongly ordered, so code must be compact and must carry out few data accesses.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Colin Cross <ccross@android.com>
Cc: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Amit Kucheria <amit.kucheria@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Stephen Warren <swarren@wwwdotorg.org>
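A worked example of the pseudocode above, assuming a hypothetical two-cluster system with MPIDRs {0x000, 0x001, 0x100, 0x101}: the mask comes out as 0x101, giving aff0_shift = 0, aff1_shift = 7, aff2_shift = 14, and a collision-free 2-bit bucket index. A standalone C program to verify:

    #include <stdio.h>
    #include <stdint.h>

    /* Shifts precomputed per the commit text for the assumed MPIDR set. */
    static const uint32_t mpidr_mask = 0x101;
    static const int aff0_shift = 0, aff1_shift = 7, aff2_shift = 14;

    static uint32_t hash(uint32_t mpidr)
    {
        uint32_t m = mpidr & mpidr_mask;
        return (m & 0xff)     >> aff0_shift |
               (m & 0xff00)   >> aff1_shift |
               (m & 0xff0000) >> aff2_shift;
    }

    int main(void)
    {
        uint32_t cpus[] = { 0x000, 0x001, 0x100, 0x101 };
        for (int i = 0; i < 4; i++)  /* prints buckets 0, 1, 2, 3 */
            printf("mpidr=0x%03x -> bucket %u\n",
                   (unsigned)cpus[i], (unsigned)hash(cpus[i]));
        return 0;
    }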
-
- 18 June 2013, 1 commit
-
-
By Stephen Warren

Add comments to machine_shutdown()/halt()/power_off()/restart() that describe their purpose and/or requirements regarding CPUs being active or not.

In machine_shutdown(), replace the call to smp_send_stop() with a call to disable_nonboot_cpus(). This completely disables all but one CPU, thus satisfying the requirement that only a single CPU be active for kexec. Adjust Kconfig dependencies for this change.

In machine_halt()/power_off()/restart(), call smp_send_stop() directly rather than via machine_shutdown(); these functions don't need to completely de-activate all CPUs using hotplug, but rather just quiesce them.

Remove smp_kill_cpus() and its call from smp_send_stop(). smp_kill_cpus() was indirectly calling smp_ops.cpu_kill() without calling smp_ops.cpu_die() on the target CPUs first. At least some implementations of smp_ops had issues with this; it caused cpu_kill() to hang on Tegra, for example. Since smp_send_stop() is only used for shutdown, halt, and power-off, there is no need to attempt any kind of CPU hotplug here.

Adjust Kconfig to reflect that machine_shutdown() (and hence kexec) relies upon disable_nonboot_cpus(). However, this alone doesn't guarantee that hotplug will work, or even that hotplug is implemented for a particular piece of HW that a multi-platform zImage runs on. Hence, add error-checking to machine_kexec() to determine whether it did work.

Suggested-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Zhangfei Gao <zhangfei.gao@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
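A condensed sketch of the resulting split (not the full code; comments paraphrase the requirements described above):

    void machine_shutdown(void)
    {
        disable_nonboot_cpus();  /* kexec needs exactly one active CPU */
    }

    void machine_halt(void)
    {
        local_irq_disable();
        smp_send_stop();         /* just quiesce secondaries, no hotplug */
        while (1)
            cpu_relax();
    }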
-
- 17 June 2013, 2 commits
-
-
By Jonathan Austin

Without an MMU it is possible for userspace programs to start executing code in places that they have no business executing. The MPU allows some level of protection against this. This patch protects the vectors page from access by userspace processes.

Userspace tasks that dereference a null pointer are already protected by an svc at 0x0 that kills them. However, when tasks use an offset from a null pointer (e.g. a function in a null struct) they miss this carefully placed svc and enter the exception vectors in user mode, ending up in the kernel. This patch causes programs that do this to receive a SEGV instead of happily entering the kernel in user mode, and hence avoid a 'Bad Mode' panic.

As part of this change it is necessary to make sigreturn happen via the stack when there is not an sa_restorer function. This change is invisible to userspace, and irrelevant to code compiled using a uClibc toolchain, which always uses an sa_restorer function.

Because we don't get to remap the vectors in !MMU, kuser_helpers are not in a defined location and hence aren't usable. This means we don't need to worry about keeping them accessible from PL0.

Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
CC: Nicolas Pitre <nico@linaro.org>
CC: Catalin Marinas <catalin.marinas@arm.com>
-
By Will Deacon

Running an OABI_COMPAT kernel on an SMP platform can lead to fun and games with page aging. If one CPU issues a swi instruction immediately before another CPU decides to mkold the page containing the swi instruction, then we will fault attempting to load the instruction during the vector_swi handler in order to retrieve its immediate field. Since this fault is not currently dealt with by our exception tables, this results in a panic:

    Unable to handle kernel paging request at virtual address 4020841c
    pgd = c490c000
    [4020841c] *pgd=84451831, *pte=bf05859d, *ppte=00000000
    Internal error: Oops: 17 [#1] PREEMPT SMP ARM
    Modules linked in: hid_sony(O)
    CPU: 1    Tainted: G        W  O (3.4.0-perf-gf496dca-01162-gcbcc62b #1)
    PC is at vector_swi+0x28/0x88
    LR is at 0x40208420

This patch wraps all of the swi instruction loads with the USER macro and provides a shared exception table entry which simply rewinds the saved user PC and returns from the system call (without setting tbl, so there are no worries with tracing or syscall restarting). Returning to userspace will re-enter the page fault handler, from where we will probably send SIGSEGV to the current task.

Reported-by: Wang, Yalin <yalin.wang@sonymobile.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 13 June 2013, 3 commits
-
-
By Stephen Boyd

Nothing about the sched_clock implementation in the ARM port is specific to the architecture. Generalize the code so that other architectures can use it by selecting GENERIC_SCHED_CLOCK.

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
[jstultz: Merge minor collisions with other patches in my tree]
Signed-off-by: John Stultz <john.stultz@linaro.org>
-
By Stephen Boyd

If we're suspended and sched_clock() is called, we're going to read the hardware one more time, throw away that value, and return the cached value we saved during the suspend callback. This is wasteful. Let's short-circuit all that and return the cached value as early as possible if we're suspended.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
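A minimal sketch of the short-circuit, assuming a clock_data struct (here called cd) that caches the epoch at suspend time; field names are illustrative, not the exact upstream ones:

    unsigned long long notrace sched_clock(void)
    {
        if (cd.suspended)
            return cd.epoch_ns;  /* cached at suspend; skip the HW read */

        return cyc_to_sched_clock(read_sched_clock(), sched_clock_mask);
    }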
-
By Stephen Boyd

The needs_suspend member is unused now that we always do the suspend/resume handling (see 6a4dae5e (ARM: 7565/1: sched: stop sched_clock() during suspend, 2012-10-23)).

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
-
- 08 June 2013, 6 commits
-
-
By Jonathan Austin

The MPU initialisation on the primary core is performed in two stages: one minimal stage to ensure the CPU can boot, and a second one after sanity_check_meminfo. As the memory configuration is known by the time we boot secondary cores, only a single step is necessary there, provided the values for DRSR are passed to the secondaries.

This patch implements this arrangement. The configuration generated for the MPU regions is made available to the secondary core, which can then use the asm MPU initialisation code to program a complete region configuration. This is necessary for SMP configurations without an MMU, as the MPU initialisation is the only way to ensure that memory is specified as 'shared'.

Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
CC: Nicolas Pitre <nico@linaro.org>
-
By Jonathan Austin

This patch adds initial support for using the MPU, which is necessary for SMP operation on PMSAv7 processors because it is the only way to ensure memory is shared. This is an initial patch; full SMP support is added later in this series.

The setup of the MPU is performed in a way analogous to that for the MMU: very early initialisation before the C environment is brought up, followed by a sanity check and more complete initialisation in C. This patch provides the simplest possible memory region configuration:

  MPU_PROBE_REGION: reserved for probing MPU details, not enabled
  MPU_BG_REGION: a 'background' region that specifies all memory as strongly ordered
  MPU_RAM_REGION: a single shared, cacheable, normal region for the valid RAM

In this early initialisation code we simply map the whole of the address space with the BG_REGION and (at least) the kernel with the RAM_REGION. The MPU has region alignment constraints that require us to round past the end of the kernel. As region 2 has a higher priority than region 1, it overrides the strongly-ordered behaviour for RAM only.

Subsequent patches will add more complete initialisation from the C world and support for bringing up secondary CPUs.

Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
CC: Hyok S. Choi <hyok.choi@samsung.com>
-
By Jonathan Austin

Without an MMU we don't need to do any TLB maintenance. Until the addition of 93dc6887 (ARM: 7684/1: errata: Workaround for Cortex-A15 erratum 798181 (TLBI/DSB operations)), building the TLB maintenance ops in smp_tlb.c worked, though none of the contents were used. Since that commit, however, SMP NOMMU has not been able to build. This patch restores that ability by making the building of smp_tlb.c dependent on MMU.

Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
-
By Will Deacon

The ARM CPU suspend code can be selected even for a !CONFIG_MMU configuration. The resulting kernel will not compile and, even if it did, would access undefined co-processor registers when executing. This patch fixes the v6 and v7 CPU suspend code for the nommu case.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Tested-by: Jonathan Austin <jonathan.austin@arm.com>
CC: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> (commit_signer:1/3=33%)
CC: Santosh Shilimkar <santosh.shilimkar@ti.com> (commit_signer:1/3=33%)
CC: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
-
By Will Deacon

nommu systems do not require any page tables, so don't try to initialise them when bringing up secondary cores.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon

This patch adds a secondary_startup entry point to head-nommu.S so that we can boot secondary CPUs on an SMP nommu configuration.

Signed-off-by: Will Deacon <will.deacon@arm.com>
CC: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
CC: Nicolas Pitre <nico@linaro.org>
-
- 07 June 2013, 2 commits
-
-
By Marc Zyngier

When booting the kernel, a bootloader could have left the virtual timer ticking away, potentially generating interrupts. This could be troublesome if the user of the virtual timer is not careful when enabling its interrupt. In order to avoid any surprise, stop the virtual timer from interrupting us when booted in HYP mode, as we'll use the physical timer in that case.

Reported-by: Giridhar Maruthy <giridhar.m@samsung.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Dave Martin <dave.martin@linaro.org>
-
By Marc Zyngier

In order to be able to use the virtual counter in a safe way, make sure it is initialized to zero before dropping to SVC.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Dave Martin <dave.martin@linaro.org>
-
- 06 June 2013, 2 commits
-
-
By Arnd Bergmann

The cpu_die field in smp_operations only exists when CONFIG_HOTPLUG_CPU is enabled, so we must enclose its initialisation in an #ifdef, but at least that lets us remove two other lines.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
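A sketch of the pattern (the xyz_* names are hypothetical; the point is that the field itself is compiled out of struct smp_operations without hotplug support):

    static struct smp_operations xyz_smp_ops __initdata = {
        .smp_init_cpus      = xyz_smp_init_cpus,
        .smp_boot_secondary = xyz_boot_secondary,
    #ifdef CONFIG_HOTPLUG_CPU
        .cpu_die            = xyz_cpu_die,  /* field absent otherwise */
    #endif
    };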
-
By Arnd Bergmann

The cpu_topology symbol is required by any driver using the topology interfaces, which leads to a couple of build errors:

    ERROR: "cpu_topology" [drivers/net/ethernet/sfc/sfc.ko] undefined!
    ERROR: "cpu_topology" [drivers/cpufreq/arm_big_little.ko] undefined!
    ERROR: "cpu_topology" [drivers/block/mtip32xx/mtip32xx.ko] undefined!

The obvious solution is to export this symbol.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Cc: Nicolas Pitre <nico@linaro.org>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
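The fix amounts to exporting the array next to its definition in arch/arm/kernel/topology.c; a sketch (whether the plain or _GPL export variant was used is an assumption here):

    struct cputopo_arm cpu_topology[NR_CPUS];
    EXPORT_SYMBOL_GPL(cpu_topology);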
-
- 04 June 2013, 1 commit
-
-
By Stephen Rothwell

Ever since commit 45f035ab ("CONFIG_HOTPLUG should be always on"), it has been basically impossible to build a kernel with CONFIG_HOTPLUG turned off. Remove all the remaining references to it.

Cc: Russell King <linux@arm.linux.org.uk>
Cc: Doug Thompson <dougthompson@xmission.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Steven Whitehouse <swhiteho@redhat.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Acked-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 03 June 2013, 1 commit
-
-
By Linus Walleij

When working with device tree support for PCI on ARM you run into a problem when mapping IRQs from the device tree irqmaps: the code in drivers/of/of_pci_irq.c will try to find the OF node on the root bridge and fail, because bus->dev.of_node is NULL. That in turn boils down to the fact that pci_set_bus_of_node() has called pcibios_get_phb_of_node() from drivers/pci/of.c to obtain the OF node of the bridge or its parent, and none is set, so NULL is returned.

Fix this by adding an additional API with a parent argument for registering PCI bridges on the ARM architecture, called pci_common_init_dev(), and pass this parent along to pci_scan_root_bus(), called from pcibios_init_hw() in bios32.c. Voila: the IRQ mappings start working, because the OF node can be retrieved from the parent. Create the old pci_common_init() as a wrapper around the new call.

Cc: Mike Rapoport <mike@compulab.co.il>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Thierry Reding <thierry.reding@avionic-design.de>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
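The compatibility wrapper described above is then trivial; roughly:

    /* Old entry point: no parent device, so DT IRQ mapping stays broken
     * for callers that need it, but existing board code keeps working. */
    static inline void pci_common_init(struct hw_pci *hw)
    {
        pci_common_init_dev(NULL, hw);
    }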
-
- 30 May 2013, 2 commits
-
-
By Will Deacon

CPUs implementing LPAE have atomic ldrd/strd instructions, meaning that userspace software can avoid having to use the exclusive variants of these instructions if they wish. This patch advertises the atomicity of these instructions via the hwcaps, so userspace can detect this CPU feature.

Reported-by: Vladimir Danushevsky <vladimir.danushevsky@oracle.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
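From the userspace side, the check is a simple auxv lookup; a sketch (the HWCAP_LPAE bit value is an assumption taken from asm/hwcap.h, guarded in case the libc headers don't define it):

    #include <stdio.h>
    #include <sys/auxv.h>

    #ifndef HWCAP_LPAE
    #define HWCAP_LPAE (1 << 20)  /* assumed value; see asm/hwcap.h */
    #endif

    int main(void)
    {
        if (getauxval(AT_HWCAP) & HWCAP_LPAE)
            puts("ldrd/strd are single-copy atomic on this CPU");
        else
            puts("fall back to ldrexd/strexd for 64-bit atomicity");
        return 0;
    }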
-
By Cyril Chemparathy

This patch redefines the early boot-time use of the R4 register to steal a few low-order bits (ARCH_PGD_SHIFT bits) on LPAE systems. This allows for up to 38-bit physical addresses.

Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 23 May 2013, 1 commit
-
-
By Steven Capper

If one reads /proc/$PID/smaps, the mmap_sem belonging to the address space of the task being examined is locked for reading. All the pages of the vmas belonging to the task's address space are then walked with this lock held.

If a gate_vma is present in the architecture, it too is examined by the fs/proc/task_mmu.c code. As gate_vma doesn't belong to the address space of the task though, its pages are not walked. A recent cleanup (commit f6604efe) of the gate_vma initialisation code set the vm_mm value to &init_mm. Unfortunately, a non-NULL vm_mm value in the gate_vma will cause the task_mmu code to attempt to walk the pages of the gate_vma (with no mmap_sem lock held). If one enables Transparent Huge Page support and vm debugging, this will then cause OOPSes, as pmd_trans_huge_lock is called without mmap_sem being locked.

This patch removes the .vm_mm value from gate_vma, restoring the original behaviour of the task_mmu code.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 21 May 2013, 3 commits
-
-
By Stephen Boyd

Before f7b861b7 ("arm: Use generic idle loop"), ARM would kill the CPU within the RCU idle section. Now that the rcu_idle_enter()/exit() pair have been pushed lower down in the idle loop, this is no longer true, so using RCU_NONIDLE here is no longer necessary and is also harmful, because RCU is not actually idle at this point.

Cc: Russell King <linux@arm.linux.org.uk>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Jon Medhurst

Add a new 'smp_init' hook to machine_desc so platforms can specify a function to be used to set up smp_ops instead of having a statically defined value. The hook must return true when smp_ops has been initialized; if it returns false, the static mdesc->smp_ops is used by default. Add the definition of 'bool' by including linux/types.h in asm/mach/arch.h, making it self-contained.

Signed-off-by: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
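A sketch of how a platform might use the hook (the my_* names and probe are hypothetical; the fallback follows the rule stated above):

    static bool __init my_smp_init(void)
    {
        if (!my_platform_needs_custom_ops())  /* hypothetical probe */
            return false;                     /* fall back to mdesc->smp */
        smp_set_ops(&my_custom_smp_ops);
        return true;                          /* ops installed, skip fallback */
    }

    /* In setup_arch(), roughly: */
    if (!mdesc->smp_init || !mdesc->smp_init()) {
        if (mdesc->smp)
            smp_set_ops(mdesc->smp);
    }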
-
By Stefano Stabellini

Rename virt_smp_ops to psci_smp_ops and move them to arch/arm/kernel/psci_smp.c. Remove mach-virt/platsmp.c, now unused. Compile psci_smp if CONFIG_ARM_PSCI and CONFIG_SMP are set. Add a cpu_die smp_op based on psci_ops.cpu_off. Initialize PSCI before setting smp_ops in setup_arch. If PSCI is available on the platform, prefer psci_smp_ops over the platform smp_ops.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Will Deacon <will.deacon@arm.com>
CC: arnd@arndb.de
CC: marc.zyngier@arm.com
CC: linux@arm.linux.org.uk
CC: nico@linaro.org
CC: rob.herring@calxeda.com
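Combined with the smp_init hook from the previous commit, the ops selection in setup_arch() then looks roughly like this (psci_smp_available() reports whether PSCI CPU control methods were probed successfully):

    if (!mdesc->smp_init || !mdesc->smp_init()) {
        if (psci_smp_available())
            smp_set_ops(&psci_smp_ops);  /* PSCI wins when present */
        else if (mdesc->smp)
            smp_set_ops(mdesc->smp);     /* platform-specific fallback */
    }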
-
- 16 May 2013, 1 commit
-
-
By Ming Lei

Commit 14318efb (ARM: 7587/1: implement optimized percpu variable access) introduced ARM's __my_cpu_offset to optimize percpu variable access, which really works well on hackbench, but it causes __my_cpu_offset to return a garbage value before it is initialized in cpu_init(), called by setup_arch(); accessing a percpu variable before setup_arch() may therefore hang the kernel. The generic __my_cpu_offset always returns zero before the percpu area is brought up and won't hang the kernel. So this patch clears __my_cpu_offset on the boot CPU early to avoid a boot hang. Percpu variables are already accessed by lockdep before setup_arch(), and enabling CONFIG_LOCK_STAT or CONFIG_DEBUG_LOCKDEP can trigger the kernel hang.

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
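ARM's optimized accessor keeps the per-cpu offset in the TPIDRPRW register; a sketch of the accessor pair involved (shapes as in arch/arm/include/asm/percpu.h), which explains why the register must be zeroed early, before any percpu access:

    /* TPIDRPRW: privileged-only thread ID register (CP15 c13, c0, 4),
     * unused by the kernel elsewhere, so free to hold the percpu offset. */
    static inline void set_my_cpu_offset(unsigned long off)
    {
        asm volatile("mcr p15, 0, %0, c13, c0, 4" : : "r" (off) : "memory");
    }

    static inline unsigned long __my_cpu_offset(void)
    {
        unsigned long off;
        asm("mrc p15, 0, %0, c13, c0, 4" : "=r" (off));
        return off;  /* garbage until set_my_cpu_offset() has run */
    }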
-