- 26 July 2013, 3 commits
-
-
Committed by Catalin Marinas

As of commit b9d4d42a (ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on pre-ARMv6 CPUs), the mm switching on VIVT processors is done in the finish_arch_post_lock_switch() function to avoid whole cache flushing with interrupts disabled. The need for a deferred mm switch is stored as a thread flag (TIF_SWITCH_MM). However, with preemption enabled, another thread switch can occur before finish_arch_post_lock_switch(). If the new thread has the same mm as the previous 'next' thread, the scheduler will not call switch_mm() and the TIF_SWITCH_MM flag won't be set for the new thread. This patch moves the switch pending flag to the mm_context_t structure, since it is specific to the mm rather than the thread.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Marc Kleine-Budde <mkl@pengutronix.de>
Tested-by: Marc Kleine-Budde <mkl@pengutronix.de>
Cc: <stable@vger.kernel.org> # 3.5+
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
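A minimal sketch of the resulting arrangement (field usage illustrative, not the verbatim patch): because the flag lives in mm->context, an intervening switch to another thread using the same mm cannot lose it.

    /* switch_mm(), interrupts disabled: defer the real switch */
    mm->context.switch_pending = 1;

    /* finish_arch_post_lock_switch(), interrupts enabled */
    if (mm->context.switch_pending) {
            mm->context.switch_pending = 0;
            cpu_switch_mm(mm->pgd, mm);
    }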
-
Committed by Fabio Estevam

Commit 93dc6887 (ARM: 7684/1: errata: Workaround for Cortex-A15 erratum 798181 (TLBI/DSB operations)) causes the following undefined instruction error on a mx53 (Cortex-A8):

Internal error: Oops - undefined instruction: 0 [#1] SMP ARM
CPU: 0 PID: 275 Comm: modprobe Not tainted 3.11.0-rc2-next-20130722-00009-g9b0f371 #881
task: df46cc00 ti: df48e000 task.ti: df48e000
PC is at check_and_switch_context+0x17c/0x4d0
LR is at check_and_switch_context+0xdc/0x4d0

This happens because check_and_switch_context() calls dummy_flush_tlb_a15_erratum() without checking whether we are really running on a Cortex-A15. To avoid this issue, only call dummy_flush_tlb_a15_erratum() inside check_and_switch_context() if erratum_a15_798181() returns true, which means we are really running on a Cortex-A15.
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Roger Quadros <rogerq@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
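The fix reduces to guarding the workaround with the core check, per the commit text; a sketch of the shape of the change in check_and_switch_context():

    /* only issue the dummy TLB operation on affected Cortex-A15 cores */
    if (erratum_a15_798181())
            dummy_flush_tlb_a15_erratum();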
-
Committed by Mark Rutland

Secondary CPUs write to __boot_cpu_mode with caches disabled, so a cached value of __boot_cpu_mode may be incoherent with the value in memory. This could lead to a failure to detect mismatched boot modes. This patch adds flushing to ensure that writes by secondaries to __boot_cpu_mode are made visible before we test against it.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <cdall@cs.columbia.edu>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
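A heavily hedged sketch of the idea; the helper name and mismatch flag are assumptions about the surrounding code, not confirmed by this log:

    /* secondaries wrote __boot_cpu_mode with the cache off; make that
     * write visible to our (possibly cached) view before comparing */
    sync_cache_r(&__boot_cpu_mode);
    if (__boot_cpu_mode & BOOT_CPU_MODE_MISMATCH)
            pr_warn("CPUs booted in inconsistent modes\n");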
-
- 15 July 2013, 1 commit
-
-
Committed by Paul Gortmaker

The __cpuinit type of throwaway sections might have made sense some time ago when RAM was more constrained, but now the savings do not offset the cost and complications. For example, the fix in commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time") is a good example of the nasty type of bugs that can be created with improper use of the various __init prefixes. After a discussion on LKML[1] it was decided that cpuinit should go the way of devinit and be phased out. Once all the users are gone, we can then finally remove the macros themselves from linux/init.h.

Note that some harmless section mismatch warnings may result, since notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c) and are flagged as __cpuinit -- so if we remove the __cpuinit from the arch specific callers, we will also get section mismatch warnings. As an intermediate step, we intend to turn the linux/init.h cpuinit related content into no-ops as early as possible, since that will get rid of these warnings. In any case, they are temporary and harmless.

This removes all the ARM uses of the __cpuinit macros from C code, and all __CPUINIT from assembly code. It also removes two ".previous" section statements that were paired off against __CPUINIT (aka .section ".cpuinit.text").

[1] https://lkml.org/lkml/2013/5/20/589

Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
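An illustrative before/after of the mechanical change; the function name is hypothetical:

    /* before: annotated for the throwaway .cpuinit.text section */
    static int __cpuinit foo_cpu_starting(unsigned int cpu)
    {
            return 0;
    }

    /* after: the annotation is simply dropped */
    static int foo_cpu_starting(unsigned int cpu)
    {
            return 0;
    }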
-
- 10 July 2013, 2 commits
-
-
Committed by Robin Holt

Preparing to move the parsing of reboot= to generic kernel code forces the change in reboot_mode handling to use the enum.

[akpm@linux-foundation.org: fix arch/arm/mach-socfpga/socfpga.c]
Signed-off-by: Robin Holt <holt@sgi.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Russ Anderson <rja@sgi.com>
Cc: Robin Holt <holt@sgi.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Robin Holt

Prepare for moving the parsing of reboot= to the generic kernel code by making reboot_mode into a more generic form.
Signed-off-by: Robin Holt <holt@sgi.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Russ Anderson <rja@sgi.com>
Cc: Robin Holt <holt@sgi.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
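A hedged sketch of the generic form this series moves toward; the exact enumerator list is an assumption and should be checked against include/linux/reboot.h:

    enum reboot_mode {
            REBOOT_COLD = 0,
            REBOOT_WARM,
            REBOOT_HARD,
            REBOOT_SOFT,
            REBOOT_GPIO,
    };

    /* arch code then stores an enum instead of a mode character */
    static enum reboot_mode reboot_mode = REBOOT_HARD;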
-
- 04 July 2013, 3 commits
-
-
Committed by Nishanth Menon

On platforms such as the Cortex-A15 based OMAP5, the SCU is not used; however, since much code is shared between the Cortex-A9 based OMAP4 (which uses the SCU) and OMAP5, it does help to have inline functions returning error values when the SCU is not present on the platform. arch/arm/mach-omap2/omap-smp.c, which is common between OMAP4 and OMAP5, handles SCU usage only for OMAP4. This fixes the following build failure with an OMAP5-only build:

arch/arm/mach-omap2/built-in.o: In function `omap4_smp_init_cpus':
arch/arm/mach-omap2/omap-smp.c:185: undefined reference to `scu_get_core_count'
arch/arm/mach-omap2/built-in.o: In function `omap4_smp_prepare_cpus':
arch/arm/mach-omap2/omap-smp.c:211: undefined reference to `scu_enable'

Reported-by: Pekon Gupta <pekon@ti.com>
Reported-by: Vincent Stehlé <v-stehle@ti.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Nishanth Menon <nm@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
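A sketch of the fallback pattern for the two symbols named in the link errors; the Kconfig guard and the return values are assumptions standing in for "SCU absent":

    #ifdef CONFIG_HAVE_ARM_SCU
    unsigned int scu_get_core_count(void __iomem *scu_base);
    void scu_enable(void __iomem *scu_base);
    #else
    static inline unsigned int scu_get_core_count(void __iomem *scu_base)
    {
            return 0;       /* no SCU: report no cores behind it */
    }
    static inline void scu_enable(void __iomem *scu_base)
    {
    }
    #endif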
-
Committed by Stefano Stabellini

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
-
Committed by Jiang Liu

VALID_PAGE() was removed from the kernel a long time ago, so fix the comment.
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Nicolas Pitre <nico@linaro.org>
Cc: Stephen Boyd <sboyd@codeaurora.org>
Cc: Giancarlo Asnaghi <giancarlo.asnaghi@st.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 29 June 2013, 1 commit
-
-
Committed by Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 27 June 2013, 6 commits
-
-
Committed by Geoff Levand

Commit d21a1c83 (ARM: KVM: define KVM_ARM_MAX_VCPUS unconditionally) changed the Kconfig logic for KVM_ARM_MAX_VCPUS to work around a build error arising from the use of KVM_ARM_MAX_VCPUS when CONFIG_KVM=n. The resulting Kconfig logic is a bit awkward and leaves KVM_ARM_MAX_VCPUS always defined in the kernel config file. This change reverts the Kconfig logic and adds a simple preprocessor conditional in kvm_host.h to handle the case where CONFIG_KVM_ARM_MAX_VCPUS is undefined.
Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
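The conditional in kvm_host.h plausibly takes this shape; the fallback value of 0 is an assumption:

    #ifdef CONFIG_KVM_ARM_MAX_VCPUS
    #define KVM_MAX_VCPUS CONFIG_KVM_ARM_MAX_VCPUS
    #else
    #define KVM_MAX_VCPUS 0
    #endif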
-
Committed by Marc Zyngier

Not saving PAR is an unfortunate oversight. If the guest performs an AT* operation and gets scheduled out before reading the result of the translation from PAR, it could be corrupted by another guest or the host. Saving this register is made slightly more complicated because KVM also uses it on the permission fault handling path, leading to an ugly "stash and restore" sequence. Fortunately, this is already a slow path so we don't really care. Also, Linux doesn't do any AT* operation, so Linux guests are not impacted by this bug.

[ Slightly tweaked to use an even register as first operand to ldrd and strd operations in interrupts_head.S - Christoffer ]

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Marc Zyngier

S2_PGD_SIZE defines the number of pages used by a stage-2 PGD and is unused, except for a VM_BUG_ON check that misuses the define. As the check is very unlikely to ever trigger, except in circumstances where KVM is the least of our worries, just kill both the define and the VM_BUG_ON check.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
-
Committed by Marc Zyngier

Admittedly, reading an MMIO register to load the PC is very weird. Writing the PC to an MMIO register is probably even worse. But the architecture doesn't forbid either, and injecting a Prefetch Abort is the wrong thing to do anyway. Remove this check altogether, and let the adventurous guest wander into LaLaLand if it feels compelled to do so.
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
-
Committed by Marc Zyngier

HYP PGDs are passed around as phys_addr_t, except just before calling into the hypervisor init code, where they are cast to a rather weird unsigned long long. Just keep them as phys_addr_t, which is what makes the most sense.
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
-
Committed by Marc Zyngier

__kvm_tlb_flush_vmid was renamed to __kvm_tlb_flush_vmid_ipa, and the old prototype should have been removed when the code was modified.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
-
- 24 June 2013, 4 commits
-
-
Committed by Marc Zyngier

Looking into the active_asids array is not enough, as we also need to look into the reserved_asids array (both represent processes that are currently running). Also, not holding the ASID allocator lock is racy: another CPU could schedule that process and trigger a rollover, making the erratum workaround miss an IPI. Exposing this outside of context.c is a little ugly, so let's define a new entry point that the erratum workaround can call to obtain the cpumask.
Cc: <stable@vger.kernel.org> # 3.9
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by André Hentschel

Since commit 6a1c5312 the user-writeable TLS register has been zeroed to prevent it from being used as a covert channel between two tasks. More and more applications are coming to Windows RT; Wine could support them, but they mostly expect to have the thread environment block (TEB) in TPIDRURW. This patch preserves that register per thread instead of clearing it. Unlike TPIDRURO, which is already switched, TPIDRURW can be updated from userspace, so it needs careful treatment in the case that we modify TPIDRURW and call fork(). To avoid this we must always read TPIDRURW in copy_thread.
Signed-off-by: André Hentschel <nerv@dawncrow.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
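A hedged sketch of reading the live register in copy_thread; the accessor name and tp_value slot are illustrative, but the coprocessor encoding is the architectural one for TPIDRURW:

    static inline unsigned long get_tpuser(void)
    {
            unsigned long reg;

            /* TPIDRURW: the user read/write thread ID register */
            asm("mrc p15, 0, %0, c13, c0, 2" : "=r" (reg));
            return reg;
    }

    /* copy_thread(): snapshot the parent's current value for the child,
     * since userspace may have changed it since the last context switch */
    thread->tp_value[1] = get_tpuser();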
-
Committed by Gregory CLEMENT

This commit fixes a regression on Armada 370 (the kernel hangs during boot) introduced by commit "ARM: 7691/1: mm: kill unused TLB_CAN_READ_FROM_L1_CACHE and use ALT_SMP instead". When coming out of either a Wait for Interrupt (WFI) or a Wait for Event (WFE) idle state, a specific timing sensitivity exists between the retiring WFI/WFE instruction and newly issued subsequent instructions. This sensitivity can result in a CPU hang scenario. The workaround is to insert either a Data Synchronization Barrier (DSB) or Data Memory Barrier (DMB) instruction immediately after the WFI/WFE instruction. This commit was based on the work of Lior Amsalem, but heavily modified to apply the errata fix dynamically according to the processor type, following the suggestions of Russell King and Nicolas Pitre.
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Willy Tarreau <w@1wt.eu>
Cc: <stable@vger.kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
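The shape of the workaround, as an inline-asm sketch of an idle routine for the affected cores (a sketch, not the patch's actual proc-v7 assembly; the function name is illustrative):

    static inline void affected_cpu_do_idle(void)
    {
            /* erratum: the barrier must immediately follow the wait */
            asm volatile("wfi\n\tdsb" : : : "memory");
    }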
-
Committed by Lorenzo Pieralisi

The __cpu_logical_map array is statically initialized to 0, which is a valid MPIDR value. To prevent issues with the current implementation, this patch defines an MPIDR_INVALID value and statically initializes the __cpu_logical_map[] array to it. For consistency, entries in the arm_dt_init_cpu_maps() tmp_map array, used to stash DT reg properties while parsing the DT, are initialized with MPIDR_INVALID as well.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
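A sketch of the sentinel approach; the exact value is an assumption (anything outside the valid MPIDR affinity bits works):

    /* an MPIDR value no real core can have */
    #define MPIDR_INVALID   (~0U)

    u32 __cpu_logical_map[NR_CPUS] = { [0 ... NR_CPUS - 1] = MPIDR_INVALID };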
-
- 22 June 2013, 1 commit
-
-
Committed by Stephen Boyd

Some new users of the ARM sched_clock framework are going through the arm-soc tree. Before 38ff87f7 (sched_clock: Make ARM's sched_clock generic for all architectures, 2013-06-01) the header file was in asm, but now it's in linux. One solution would be to do an evil merge of the arm-soc tree and fix up the asm users, but it's easier to add a temporary asm header that we can remove, along with the few stragglers, after the merge window is over.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
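The temporary shim plausibly amounts to a one-line redirect (a sketch of the described arrangement):

    /* arch/arm/include/asm/sched_clock.h -- temporary, to be removed
     * once all users include the generic header directly */
    #include <linux/sched_clock.h>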
-
- 20 June 2013, 2 commits
-
-
Committed by Lorenzo Pieralisi

The current implementation of cpu_{suspend}/cpu_{resume} relies on the MPIDR to index the array of pointers where the context is saved and restored. This approach works as long as the MPIDR can be considered a linear index, so that the pointer array can simply be dereferenced using the MPIDR[7:0] value. On ARM multi-cluster systems, where the MPIDR may not be a linear index, a mapping function must be applied to it before it can be used for array look-ups.

This patch adds code to the cpu_{suspend}/cpu_{resume} implementation that relies on a shifting-and-ORing hashing method to map an MPIDR value to a set of buckets precomputed at boot, giving a collision-free mapping from MPIDR to context pointers. The hashing algorithm must be simple, fast, and implementable with few instructions, since in the cpu_resume path the mapping is carried out with the MMU and the I-cache off, hence code and data are fetched from DRAM with no caching available. Simplicity is counterbalanced by a small increase in memory (allocated dynamically) for the stack-pointer buckets, which should anyway be fairly limited on most systems.

Memory for context pointers is allocated in an early_initcall, with its size precomputed and stashed previously in kernel data structures. It is allocated through kmalloc; this guarantees contiguous physical addresses for the allocated memory, which is fundamental to the correct functioning of the resume mechanism, since it relies on the context pointer array being a chunk of contiguous physical memory. Virtual-to-physical address conversion for the context pointer array base is carried out at boot, to avoid fiddling with virt_to_phys conversions in the cpu_resume path, which is quite fragile and should execute as few instructions as possible. The virtual and physical context pointer base array addresses are stashed in a struct that is accessible from assembly using values generated through the asm-offsets.c mechanism.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Colin Cross <ccross@android.com>
Cc: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Amit Kucheria <amit.kucheria@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Stephen Warren <swarren@wwwdotorg.org>
-
Committed by Lorenzo Pieralisi

On ARM SMP systems, cores are identified by their MPIDR register. The MPIDR guidelines in the ARM ARM do not provide strict enforcement of MPIDR layout, only recommendations that, if followed, split the MPIDR on ARM 32-bit platforms into three affinity levels. In multi-cluster systems like big.LITTLE, if the affinity guidelines are followed, the MPIDR can no longer be considered an index. This means that the association between a logical CPU in the kernel and the HW CPU identifier becomes somewhat more complicated, requiring methods like hashing to associate a given MPIDR with a CPU logical index, so that the look-up can be carried out in an efficient and scalable way.

This patch provides a kernel function that, starting from the cpu_logical_map, implements collision-free hashing of MPIDR values by checking all significant bits of the MPIDR affinity level bitfields. The hashing can then be carried out through bit shifting and ORing; the resulting hash algorithm is a collision-free, though not minimal, hash that can be executed with few assembly instructions. The MPIDR is filtered through an MPIDR mask that is built by checking all bits that toggle in the set of MPIDRs corresponding to possible CPUs. Bits that do not toggle carry no information, so they do not contribute to the resulting hash.

Pseudo code:

/* check all bits that toggle, so they are required */
for (i = 1, mpidr_mask = 0; i < num_possible_cpus(); i++)
	mpidr_mask |= (cpu_logical_map(i) ^ cpu_logical_map(0));

/*
 * Build shifts to be applied to aff0, aff1, aff2 values to hash the mpidr
 * fls() returns the last bit set in a word, 0 if none
 * ffs() returns the first bit set in a word, 0 if none
 */
fs0 = mpidr_mask[7:0] ? ffs(mpidr_mask[7:0]) - 1 : 0;
fs1 = mpidr_mask[15:8] ? ffs(mpidr_mask[15:8]) - 1 : 0;
fs2 = mpidr_mask[23:16] ? ffs(mpidr_mask[23:16]) - 1 : 0;
ls0 = fls(mpidr_mask[7:0]);
ls1 = fls(mpidr_mask[15:8]);
ls2 = fls(mpidr_mask[23:16]);
bits0 = ls0 - fs0;
bits1 = ls1 - fs1;
bits2 = ls2 - fs2;
aff0_shift = fs0;
aff1_shift = 8 + fs1 - bits0;
aff2_shift = 16 + fs2 - (bits0 + bits1);

u32 hash(u32 mpidr)
{
	u32 l0, l1, l2;
	u32 mpidr_masked = mpidr & mpidr_mask;

	l0 = mpidr_masked & 0xff;
	l1 = mpidr_masked & 0xff00;
	l2 = mpidr_masked & 0xff0000;

	return (l0 >> aff0_shift | l1 >> aff1_shift | l2 >> aff2_shift);
}

The hashing algorithm relies on the inherent properties set in the ARM ARM recommendations for the MPIDR. Exotic configurations, where for instance the MPIDR values at a given affinity level have large holes, can end up requiring big hash tables, since the compression of values achievable through shifting is somewhat crippled when holes are present. The kernel warns if the number of buckets of the resulting hash table exceeds the number of possible CPUs by a factor of 4, which is a symptom of a very sparse HW MPIDR configuration.

The hash algorithm is quite simple and can easily be implemented in assembly code, to be used in code paths where the kernel virtual address space is not set up (i.e. cpu_resume) and instruction and data fetches are strongly ordered, so code must be compact and must carry out few data accesses.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Colin Cross <ccross@android.com>
Cc: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Amit Kucheria <amit.kucheria@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Stephen Warren <swarren@wwwdotorg.org>
-
- 17 June 2013, 4 commits
-
-
Committed by Jonathan Austin

Without an MMU it is possible for userspace programs to start executing code in places where they have no business executing. The MPU allows some level of protection against this. This patch protects the vectors page from access by userspace processes. Userspace tasks that dereference a null pointer are already protected by an svc at 0x0 that kills them. However, when tasks use an offset from a null pointer (e.g. a function in a null struct), they miss this carefully placed svc, enter the exception vectors in user mode, and end up in the kernel. This patch causes programs that do this to receive a SEGV instead of happily entering the kernel in user mode, and hence avoids a 'Bad Mode' panic. As part of this change it is necessary to make sigreturn happen via the stack when there is no sa_restorer function. This change is invisible to userspace, and irrelevant to code compiled with a uClibc toolchain, which always uses an sa_restorer function. Because we don't get to remap the vectors in !MMU, kuser_helpers are not at a defined location and hence aren't usable. This means we don't need to worry about keeping them accessible from PL0.
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
CC: Nicolas Pitre <nico@linaro.org>
CC: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Simon Baatz

Commit f8b63c18 made flush_kernel_dcache_page a no-op, assuming that the pages it needs to handle are kernel-mapped only. However, pages with user space mappings may occur, for example when doing direct I/O. Thus, continue to do lazy flushing if there are no user space mappings; otherwise, flush the kernel cache lines directly.
Signed-off-by: Simon Baatz <gmbnomis@gmail.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org> # 3.2+
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
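A hedged sketch of the restored logic; the condition shape follows the commit text, while the surrounding details (aliasing-cache check, highmem handling) in the real function are omitted:

    void flush_kernel_dcache_page(struct page *page)
    {
            struct address_space *mapping = page_mapping(page);

            if (mapping && !mapping_mapped(mapping))
                    return;         /* no user mappings: keep flushing lazy */

            /* user-mapped (or anonymous, e.g. direct I/O): flush now */
            __cpuc_flush_dcache_area(page_address(page), PAGE_SIZE);
    }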
-
Committed by Will Deacon

When scheduling an mm on a CPU where it hasn't previously been used, we flush the icache on that CPU so that any code loaded previously on a different core can be safely executed. For cores with hardware broadcasting of cache maintenance operations, this is clearly unnecessary, since the inner-shareable invalidation in __sync_icache_dcache will affect all CPUs. This patch makes the icache flush in switch_mm conditional on cache_ops_need_broadcast().
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Albin Tonnerre <albin.tonnerre@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
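A sketch of the conditional in the switch_mm path; cache_ops_need_broadcast() is true only when maintenance is not broadcast in hardware, and the cpumask tests shown are assumptions about the surrounding migration check:

    /* check for possible thread migration */
    if (cache_ops_need_broadcast() &&
        !cpumask_empty(mm_cpumask(next)) &&
        !cpumask_test_cpu(cpu, mm_cpumask(next)))
            __flush_icache_all();   /* first run of this mm on this CPU */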
-
Committed by Will Deacon

An exclusive store instruction may fail for reasons other than lock contention (e.g. a cache eviction during the critical section), so, in line with other architectures using similar exclusive instructions (alpha, mips, powerpc), retry the trylock operation if the lock appears to be free but the strex reported failure.
Reported-by: Tony Thompson <anthony.thompson@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
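The retry shape, as a C-level sketch; the real implementation is ldrex/strex inline assembly, and load_exclusive/store_exclusive here are hypothetical stand-ins for those instructions:

    static inline int arch_spin_trylock(arch_spinlock_t *lock)
    {
            unsigned long contended, res;

            do {
                    contended = load_exclusive(&lock->slock);  /* ldrex */
                    if (contended)
                            return 0;       /* genuinely held: give up */
                    res = store_exclusive(&lock->slock, 1);    /* strex */
            } while (res);                  /* spurious strex failure: retry */

            smp_mb();
            return 1;
    }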
-
- 13 June 2013, 1 commit
-
-
Committed by Stephen Boyd

Nothing about the sched_clock implementation in the ARM port is specific to the architecture. Generalize the code so that other architectures can use it by selecting GENERIC_SCHED_CLOCK.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
[jstultz: Merge minor collisions with other patches in my tree]
Signed-off-by: John Stultz <john.stultz@linaro.org>
-
- 08 June 2013, 7 commits
-
-
Committed by Jonathan Austin

The MPU initialisation on the primary core is performed in two stages: one minimal stage to ensure the CPU can boot, and a second one after sanity_check_meminfo. As the memory configuration is known by the time we boot secondary cores, only a single step is necessary, provided the values for DRSR are passed to the secondaries. This patch implements this arrangement: the configuration generated for the MPU regions is made available to the secondary core, which can then use the asm MPU initialisation code to program a complete region configuration. This is necessary for SMP configurations without an MMU, as the MPU initialisation is the only way to ensure that memory is specified as 'shared'.
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
CC: Nicolas Pitre <nico@linaro.org>
-
Committed by Jonathan Austin

This patch adds initial support for using the MPU, which is necessary for SMP operation on PMSAv7 processors because it is the only way to ensure memory is shared. This is an initial patch; full SMP support is added later in this series. The setup of the MPU is performed in a way analogous to that for the MMU: very early initialisation before the C environment is brought up, followed by a sanity check and more complete initialisation in C.

This patch provides the simplest possible memory region configuration (sketched as region defines below):

MPU_PROBE_REGION: reserved for probing MPU details, not enabled
MPU_BG_REGION: a 'background' region that specifies all memory as strongly ordered
MPU_RAM_REGION: a single shared, cacheable, normal region for the valid RAM

In this early initialisation code we simply map the whole of the address space with the BG_REGION and (at least) the kernel with the RAM_REGION. The MPU has region alignment constraints that require us to round past the end of the kernel. As region 2 has a higher priority than region 1, it overrides the strongly-ordered behaviour for RAM only. Subsequent patches will add more complete initialisation from the C world and support for bringing up secondary CPUs.
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
CC: Hyok S. Choi <hyok.choi@samsung.com>
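The three-region layout might be expressed as static region numbers like the following; the numeric values follow the priority ordering described in the message but are assumptions:

    #define MPU_PROBE_REGION  0  /* reserved for probing, never enabled */
    #define MPU_BG_REGION     1  /* background: everything strongly-ordered */
    #define MPU_RAM_REGION    2  /* shared, cacheable, normal RAM; wins over 1 */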
-
Committed by Jonathan Austin

This commit adds definitions relevant to the ARMv7 PMSA-compliant MPU. The register layouts and region configuration data are made accessible to asm as well as C code, so that they can be used in early bring-up of the MPU. The MPU region information structs assume that the properties for the I/D sides are the same, though the implementation could be trivially extended for future platforms where this is no longer true. The MPU_*_REGION defines are used for the basic, static MPU region setup.
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
-
Committed by Jonathan Austin

This patch adds the following definitions relevant to the PMSA: SCTLR bit 17 (CR_BR, the Background Region bit) is added to the list of CR_* bitfields; this bit determines whether to use the architecturally defined memory map. The MPUIR is added to the registers available via the read_cpuid macro; the MPUIR is the MPU type register.
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
CC: "Uwe Kleine-König" <u.kleine-koenig@pengutronix.de>
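The first addition follows directly from the text (bit 17, named per the existing CR_* convention):

    #define CR_BR   (1 << 17)       /* PMSA Background Region enable */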
-
Committed by Jonathan Austin

Since the merging of Will's tlb-ops branch, specifically 89c7e4b8 (ARM: 7661/1: mm: perform explicit branch predictor maintenance when required), building SMP without CONFIG_MMU has been broken. The local_flush_bp_all function is only called for operations related to changing the kernel's view of memory and ASID rollover, both of which are irrelevant to a !MMU kernel. This patch adds a stub local_flush_bp_all() function to the other TLB maintenance stubs and restores the ability to build an SMP !MMU kernel.
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon

cpu_switch_mm is a logical nop on nommu systems, so define it as such when !CONFIG_MMU.
Signed-off-by: Will Deacon <will.deacon@arm.com>
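One plausible spelling of that nop (a sketch; the exact macro form in the tree may differ):

    #ifndef CONFIG_MMU
    #define cpu_switch_mm(pgd, mm)  do { } while (0)
    #endif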
-
Committed by Will Deacon

nommu platforms do not perform address translation and therefore clearly don't have TLBs. However, some SMP code assumes the presence of the TLB flushing routines and will therefore fail to compile for a nommu system. This patch defines dummy local_* TLB operations and #defines tlb_ops_need_broadcast() as 0, causing the usual ARM SMP TLB operations to call the local variants instead.
Signed-off-by: Will Deacon <will.deacon@arm.com>
CC: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
CC: Nicolas Pitre <nico@linaro.org>
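A sketch of the stub pattern the commit describes, with one representative dummy shown (the full set of local_* stubs is elided):

    /* nommu: no TLBs, so local maintenance is a no-op and nothing
     * ever needs to be broadcast to other cores */
    #define tlb_ops_need_broadcast()        0

    static inline void local_flush_tlb_all(void) { }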
-
- 07 June 2013, 1 commit
-
-
Committed by Mark Rutland

Switching between reading the virtual or physical counters is problematic, as some core code wants a view of time before we're fully set up. Using a function pointer and switching the source after the first read can make time appear to go backwards, and having a check in the read function is an unfortunate block on what we want to be a fast path. Instead, this patch makes us always use the virtual counters. If we're a guest, or don't have hyp mode, we'll use the virtual timers, and as such don't care about CNTVOFF as long as it doesn't change in such a way as to make time appear to travel backwards. As the guest will use the virtual timers, a (potential) KVM host must use the physical timers (which can wake up the host even if they fire while a guest is executing), and hence a host must have CNTVOFF set to zero so as to have a consistent view of time between the physical timers and virtual counters.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Rob Herring <rob.herring@calxeda.com>
-
- 06 June 2013, 2 commits
-
-
Committed by Peter Zijlstra

Since the introduction of preemptible mmu_gather, TLB fast mode has been broken. TLB fast mode relies on there being absolutely no concurrency: it frees pages first and invalidates TLBs later. However, now we can get concurrency and stuff goes *bang*. This patch removes all tlb_fast_mode() code; this was found to be the better option versus trying to patch the hole by entangling TLB invalidation with the scheduler.
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Tony Luck <tony.luck@intel.com>
Reported-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Will Deacon

__my_cpu_offset is non-volatile, since we want its value to be cached when we access several per-cpu variables in a row with preemption disabled. This means that we rely on preempt_{en,dis}able to hazard with the operation via the barrier() macro, so that we can't end up migrating CPUs without reloading the per-cpu offset. Unfortunately, GCC doesn't treat a "memory" clobber on a non-volatile asm block as a side-effect, and will happily re-order it before other memory clobbers (including those in preempt_disable()) and cache the value. This has been observed to break the cmpxchg logic in the slub allocator, leading to livelock in kmem_cache_alloc in mainline kernels. This patch adds a dummy memory input operand to __my_cpu_offset, forcing it to be ordered with respect to the barrier() macro.
Cc: <stable@vger.kernel.org>
Cc: Rob Herring <rob.herring@calxeda.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
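A sketch of the described fix, close to but not guaranteed to match the in-tree code; the mrc reads TPIDRPRW, where ARM keeps the per-cpu offset:

    static inline unsigned long __my_cpu_offset(void)
    {
            unsigned long off;
            register unsigned long *sp asm ("sp");

            /*
             * The dummy "Q"(*sp) input ties this asm to memory, so GCC
             * cannot reorder it across barrier()'s memory clobber in
             * preempt_disable()/preempt_enable().
             */
            asm("mrc p15, 0, %0, c13, c0, 4" : "=r" (off) : "Q" (*sp));
            return off;
    }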
-
- 05 June 2013, 1 commit
-
-
Committed by Stefano Stabellini

Define xen_remap as ioremap_cache (MT_MEMORY and MT_DEVICE_CACHED end up having the same AttrIndx encoding). Remove the include of asm/mach/map.h, now unneeded.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
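The define presumably amounts to a one-liner like the following (parameter names are illustrative):

    #define xen_remap(cookie, size) ioremap_cache((cookie), (size))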
-
- 04 June 2013, 1 commit
-
-
Committed by Catalin Marinas

This patch adds support for THP (transparent huge pages) on LPAE systems. When this feature is enabled, the kernel tries to map anonymous pages as 2MB sections where possible.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
[steve.capper@linaro.org: symbolic constants used, value of PMD_SECT_SPLITTING adjusted, tlbflush.h included in pgtable.h, added PROT_NONE support.]
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
-