- 24 June 2013, 4 commits
-
-
Submitted by Marc Zyngier
Looking into the active_asids array is not enough, as we also need to look into the reserved_asids array (they both represent processes that are currently running). Also, not holding the ASID allocator lock is racy, as another CPU could schedule that process and trigger a rollover, making the erratum workaround miss an IPI. Exposing this outside of context.c is a little ugly on the side, so let's define a new entry point that the erratum workaround can call to obtain the cpumask.

Cc: <stable@vger.kernel.org> # 3.9
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by André Hentschel
Since commit 6a1c5312 the user-writeable TLS register was zeroed to prevent it from being used as a covert channel between two tasks. More and more applications are coming to Windows RT; Wine could support them, but they mostly expect to have the thread environment block (TEB) in TPIDRURW. This patch preserves that register per thread instead of clearing it. Unlike TPIDRURO, which is already switched, TPIDRURW can be updated from userspace, so it needs careful treatment in the case that we modify TPIDRURW and call fork(). To avoid this we must always read TPIDRURW in copy_thread.

Signed-off-by: André Hentschel <nerv@dawncrow.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
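A minimal sketch of the idea, assuming kernel context; read_tpidrurw is a hypothetical helper name, not the patch's actual code. The point is simply that the user-writeable TLS register (TPIDRURW) is snapshotted at thread-copy time so a value written by userspace just before fork() survives in the child.

    /* Hedged sketch: snapshot TPIDRURW (CP15 c13, c0, 2) when copying a thread */
    static inline unsigned long read_tpidrurw(void)
    {
        unsigned long val;

        asm("mrc p15, 0, %0, c13, c0, 2" : "=r" (val));  /* user read/write TLS register */
        return val;
    }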
-
Submitted by Gregory CLEMENT
This commit fixes the regression on Armada 370 (the kernel hangs during boot) introduced by the commit "ARM: 7691/1: mm: kill unused TLB_CAN_READ_FROM_L1_CACHE and use ALT_SMP instead". When coming out of either a Wait for Interrupt (WFI) or a Wait for Event (WFE) idle state, a specific timing sensitivity exists between the retiring WFI/WFE instruction and newly issued subsequent instructions. This sensitivity can result in a CPU hang scenario. The workaround is to insert either a Data Synchronization Barrier (DSB) or Data Memory Barrier (DMB) command immediately after the WFI/WFE instruction. This commit was based on the work of Lior Amsalem, but heavily modified to apply the errata fix dynamically according to the processor type, thanks to the suggestions of Russell King and Nicolas Pitre.

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Willy Tarreau <w@1wt.eu>
Cc: <stable@vger.kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
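A hedged sketch of the workaround's shape only; the real patch applies it dynamically per processor type rather than unconditionally as shown here, and the helper name is illustrative.

    /* Sketch: issue a barrier immediately after the retiring WFI */
    static inline void idle_with_barrier(void)
    {
        asm volatile(
            "wfi\n\t"    /* enter the low-power state */
            "dsb"        /* barrier right after WFI retires, per the erratum workaround */
            ::: "memory");
    }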
-
Submitted by Lorenzo Pieralisi
The __cpu_logical_map array is statically initialized to 0, which is a valid MPIDR value. To prevent issues with the current implementation, this patch defines an MPIDR_INVALID value, and statically initializes the __cpu_logical_map[] array to it. Entries in the arm_dt_init_cpu_maps() tmp_map array, used to stash DT reg properties while parsing the DT, are initialized with the MPIDR_INVALID value as well for consistency.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
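A minimal sketch of the direction described, assuming kernel context; the exact constant chosen for MPIDR_INVALID is an assumption here. The point is that 0 is a legal MPIDR, so "unset" must be a value outside the 24-bit affinity field.

    #define MPIDR_HWID_BITMASK  0xFFFFFF
    #define MPIDR_INVALID       (~MPIDR_HWID_BITMASK)    /* cannot collide with a real MPIDR */

    static u32 __cpu_logical_map[NR_CPUS] = {
        [0 ... NR_CPUS - 1] = MPIDR_INVALID              /* no slot looks like CPU 0 anymore */
    };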
-
- 20 June 2013, 2 commits
-
-
Submitted by Lorenzo Pieralisi
The current implementation of cpu_{suspend}/cpu_{resume} relies on the MPIDR to index the array of pointers where the context is saved and restored. This approach works as long as the MPIDR can be considered a linear index, so that the pointer array can simply be dereferenced using the MPIDR[7:0] value. On ARM multi-cluster systems, where the MPIDR may not be a linear index, a mapping function must be applied to it so that it can be used for array look-ups when dereferencing the stack pointer array. This patch adds code to the cpu_{suspend}/cpu_{resume} implementation that relies on a shifting-and-ORing hashing method to map an MPIDR value to a set of buckets precomputed at boot, giving a collision-free mapping from MPIDR to context pointers. The hashing algorithm must be simple, fast, and implementable with few instructions, since in the cpu_resume path the mapping is carried out with the MMU off and the I-cache off, hence code and data are fetched from DRAM with no caching available. Simplicity is counterbalanced by a small increase in memory (allocated dynamically) for the stack pointer buckets, which should anyway be fairly limited on most systems. Memory for the context pointers is allocated in an early_initcall, with its size precomputed and stashed previously in kernel data structures. Memory for the context pointers is allocated through kmalloc; this guarantees contiguous physical addresses for the allocated memory, which is fundamental to the correct functioning of the resume mechanism, which relies on the context pointer array being a chunk of contiguous physical memory. Virtual-to-physical address conversion for the context pointer array base is carried out at boot to avoid fiddling with virt_to_phys conversions in the cpu_resume path, which is quite fragile and should be optimized to execute as few instructions as possible. The virtual and physical context pointer base array addresses are stashed in a struct that is accessible from assembly using values generated through the asm-offsets.c mechanism.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Colin Cross <ccross@android.com>
Cc: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Amit Kucheria <amit.kucheria@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Stephen Warren <swarren@wwwdotorg.org>
-
Submitted by Lorenzo Pieralisi
On ARM SMP systems, cores are identified by their MPIDR register. The MPIDR guidelines in the ARM ARM do not provide strict enforcement of the MPIDR layout, only recommendations that, if followed, split the MPIDR on ARM 32-bit platforms into three affinity levels. In multi-cluster systems like big.LITTLE, if the affinity guidelines are followed, the MPIDR cannot be considered an index anymore. This means that the association between a logical CPU in the kernel and the HW CPU identifier becomes somewhat more complicated, requiring methods like hashing to associate a given MPIDR with a CPU logical index, so that the look-up can be carried out in an efficient and scalable way. This patch provides a function in the kernel that, starting from the cpu_logical_map, implements collision-free hashing of MPIDR values by checking all significant bits of the MPIDR affinity level bitfields. The hashing can then be carried out through bit shifting and ORing; the resulting hash algorithm is a collision-free, though not minimal, hash that can be executed with few assembly instructions. The MPIDR is filtered through an MPIDR mask that is built by checking all bits that toggle in the set of MPIDRs corresponding to possible CPUs. Bits that do not toggle do not carry information, so they do not contribute to the resulting hash.

Pseudo code:

    /* check all bits that toggle, so they are required */
    for (i = 1, mpidr_mask = 0; i < num_possible_cpus(); i++)
        mpidr_mask |= (cpu_logical_map(i) ^ cpu_logical_map(0));

    /*
     * Build shifts to be applied to aff0, aff1, aff2 values to hash the mpidr
     * fls() returns the last bit set in a word, 0 if none
     * ffs() returns the first bit set in a word, 0 if none
     */
    fs0 = mpidr_mask[7:0] ? ffs(mpidr_mask[7:0]) - 1 : 0;
    fs1 = mpidr_mask[15:8] ? ffs(mpidr_mask[15:8]) - 1 : 0;
    fs2 = mpidr_mask[23:16] ? ffs(mpidr_mask[23:16]) - 1 : 0;
    ls0 = fls(mpidr_mask[7:0]);
    ls1 = fls(mpidr_mask[15:8]);
    ls2 = fls(mpidr_mask[23:16]);
    bits0 = ls0 - fs0;
    bits1 = ls1 - fs1;
    bits2 = ls2 - fs2;
    aff0_shift = fs0;
    aff1_shift = 8 + fs1 - bits0;
    aff2_shift = 16 + fs2 - (bits0 + bits1);

    u32 hash(u32 mpidr)
    {
        u32 l0, l1, l2;
        u32 mpidr_masked = mpidr & mpidr_mask;

        l0 = mpidr_masked & 0xff;
        l1 = mpidr_masked & 0xff00;
        l2 = mpidr_masked & 0xff0000;

        return (l0 >> aff0_shift | l1 >> aff1_shift | l2 >> aff2_shift);
    }

The hashing algorithm relies on the inherent properties set out in the ARM ARM recommendations for the MPIDR. Exotic configurations, where for instance the MPIDR values at a given affinity level have large holes, can end up requiring big hash tables, since the compression of values achievable through shifting is somewhat crippled when holes are present. The kernel warns if the number of buckets in the resulting hash table exceeds the number of possible CPUs by a factor of 4, which is a symptom of a very sparse HW MPIDR configuration. The hash algorithm is quite simple and can easily be implemented in assembly code, to be used in code paths where the kernel virtual address space is not set up (i.e. cpu_resume) and instruction and data fetches are strongly ordered, so code must be compact and must carry out few data accesses.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Colin Cross <ccross@android.com>
Cc: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Amit Kucheria <amit.kucheria@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Stephen Warren <swarren@wwwdotorg.org>
-
- 17 June 2013, 4 commits
-
-
Submitted by Jonathan Austin
Without an MMU it is possible for userspace programs to start executing code in places where they have no business executing. The MPU allows some level of protection against this. This patch protects the vectors page from access by userspace processes. Userspace tasks that dereference a null pointer are already protected by an svc at 0x0 that kills them. However, when tasks use an offset from a null pointer (e.g. a function in a null struct) they miss this carefully placed svc and enter the exception vectors in user mode, ending up in the kernel. This patch causes programs that do this to receive a SEGV instead of happily entering the kernel in user mode, and hence avoids a 'Bad Mode' panic. As part of this change it is necessary to make sigreturn happen via the stack when there is no sa_restorer function. This change is invisible to userspace, and irrelevant to code compiled using a uClibc toolchain, which always uses an sa_restorer function. Because we don't get to remap the vectors in !MMU, kuser_helpers are not in a defined location and hence aren't usable. This means we don't need to worry about keeping them accessible from PL0.

Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
CC: Nicolas Pitre <nico@linaro.org>
CC: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Simon Baatz
Commit f8b63c18 made flush_kernel_dcache_page a no-op, assuming that the pages it needs to handle are kernel mapped only. However, for example when doing direct I/O, pages with user space mappings may occur. Thus, continue to do lazy flushing if there are no user space mappings; otherwise, flush the kernel cache lines directly.

Signed-off-by: Simon Baatz <gmbnomis@gmail.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org> # 3.2+
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
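A hedged sketch of the decision only; the real function also has to cope with highmem and aliasing caches, which are omitted here.

    void flush_kernel_dcache_page(struct page *page)
    {
        struct address_space *mapping = page_mapping(page);

        if (mapping && !mapping_mapped(mapping))
            return;    /* kernel-only page: keep flushing lazily as before */

        /* the page may be mapped into user space (e.g. direct I/O): flush now */
        __cpuc_flush_dcache_area(page_address(page), PAGE_SIZE);
    }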
-
Submitted by Will Deacon
When scheduling an mm on a CPU where it hasn't previously been used, we flush the icache on that CPU so that any code loaded previously on a different core can be safely executed. For cores with hardware broadcasting of cache maintenance operations, this is clearly unnecessary, since the inner-shareable invalidation in __sync_icache_dcache will affect all CPUs. This patch conditionalises the icache flush in switch_mm based on cache_ops_need_broadcast().

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Albin Tonnerre <albin.tonnerre@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
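A hedged sketch of the check inside switch_mm(); the helper name is illustrative and the surrounding bookkeeping is omitted.

    static inline void flush_icache_if_needed(struct mm_struct *next)
    {
        unsigned int cpu = smp_processor_id();

        /* only flush when maintenance is not broadcast in HW and this mm is new on this CPU */
        if (cache_ops_need_broadcast() &&
            !cpumask_test_cpu(cpu, mm_cpumask(next)))
            __flush_icache_all();
    }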
-
Submitted by Will Deacon
An exclusive store instruction may fail for reasons other than lock contention (e.g. a cache eviction during the critical section) so, in line with other architectures using similar exclusive instructions (alpha, mips, powerpc), retry the trylock operation if the lock appears to be free but the strex reported failure.

Reported-by: Tony Thompson <anthony.thompson@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
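A hedged sketch of the retry idea, not the kernel's ticket-lock assembly; ldrex()/strex() below are illustrative wrappers around the LDREX/STREX instructions, and memory barriers are omitted for brevity.

    static inline unsigned int ldrex(unsigned int *p)
    {
        unsigned int v;

        asm volatile("ldrex %0, [%1]" : "=&r" (v) : "r" (p));
        return v;
    }

    static inline unsigned int strex(unsigned int *p, unsigned int v)
    {
        unsigned int failed;

        /* returns 0 on success, 1 if the exclusive monitor was lost */
        asm volatile("strex %0, %2, [%1]" : "=&r" (failed) : "r" (p), "r" (v) : "memory");
        return failed;
    }

    static inline int trylock_sketch(unsigned int *lock)
    {
        unsigned int val, failed;

        do {
            val = ldrex(lock);
            if (val != 0)
                return 0;               /* genuinely held by someone else: give up */
            failed = strex(lock, 1);    /* may fail spuriously, e.g. after a cache eviction */
        } while (failed);               /* retry as long as the lock still looked free */

        return 1;
    }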
-
- 08 June 2013, 7 commits
-
-
Submitted by Jonathan Austin
The MPU initialisation on the primary core is performed in two stages: one minimal stage to ensure the CPU can boot, and a second one after sanity_check_meminfo. As the memory configuration is known by the time we boot secondary cores, only a single step is necessary, provided the values for the DRSR are passed to the secondaries. This patch implements this arrangement. The configuration generated for the MPU regions is made available to the secondary core, which can then use the asm MPU initialisation code to program a complete region configuration. This is necessary for SMP configurations without an MMU, as the MPU initialisation is the only way to ensure that memory is specified as 'shared'.

Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
CC: Nicolas Pitre <nico@linaro.org>
-
Submitted by Jonathan Austin
This patch adds initial support for using the MPU, which is necessary for SMP operation on PMSAv7 processors because it is the only way to ensure memory is shared. This is an initial patch and full SMP support is added later in this series. The setup of the MPU is performed in a way analogous to that for the MMU: very early initialisation before the C environment is brought up, followed by a sanity check and more complete initialisation in C. This patch provides the simplest possible memory region configuration:

MPU_PROBE_REGION: reserved for probing MPU details, not enabled
MPU_BG_REGION: a 'background' region that specifies all memory as strongly ordered
MPU_RAM_REGION: a single shared, cacheable, normal region for the valid RAM

In this early initialisation code we simply map the whole of the address space with the BG_REGION and (at least) the kernel with the RAM_REGION. The MPU has region alignment constraints that require us to round past the end of the kernel. As region 2 has a higher priority than region 1, it overrides the strongly-ordered behaviour for RAM only. Subsequent patches will add more complete initialisation from the C world and support for bringing up secondary CPUs.

Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
CC: Hyok S. Choi <hyok.choi@samsung.com>
-
Submitted by Jonathan Austin
This commit adds definitions relevant to the ARMv7 PMSA-compliant MPU. The register layouts and region configuration data are made accessible to asm as well as C code so that they can be used in early bring-up of the MPU. The MPU region information structs assume that the properties for the I/D sides are the same, though the implementation could be trivially extended for future platforms where this is no longer true. The MPU_*_REGION defines are used for the basic, static MPU region setup.

Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Jonathan Austin
This patch adds the following definitions relevant to the PMSA: SCTLR bit 17 (CR_BR, the Background Region bit) is added to the list of CR_* bitfields; this bit determines whether to use the architecturally defined memory map. The MPUIR (the MPU Type register) is added to the registers available via the read_cpuid macro.

Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
CC: "Uwe Kleine-König" <u.kleine-koenig@pengutronix.de>
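A hedged sketch of the kind of definitions described; the bit position for CR_BR comes from the commit text, while the CP15 encoding used for MPUIR and the helper name are assumptions.

    #define CR_BR    (1 << 17)    /* SCTLR.BR: use the architecturally defined memory map */

    /* MPU Type register (MPUIR) on PMSAv7 parts */
    static inline unsigned int read_mpuir(void)
    {
        unsigned int val;

        asm("mrc p15, 0, %0, c0, c0, 4" : "=r" (val));
        return val;
    }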
-
Submitted by Jonathan Austin
Since the merging of Will's tlb-ops branch, specifically 89c7e4b8 (ARM: 7661/1: mm: perform explicit branch predictor maintenance when required), building SMP without CONFIG_MMU has been broken. The local_flush_bp_all function is only called for operations related to changing the kernel's view of memory and ASID rollover - both of which are irrelevant to an !MMU kernel. This patch adds a stub local_flush_bp_all() function to the other tlb maintenance stubs and restores the ability to build an SMP !MMU kernel.

Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
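A minimal sketch of the stub described, assuming it sits alongside the existing nommu TLB stubs; the exact form used in the patch may differ.

    static inline void local_flush_bp_all(void)
    {
        /* no branch predictor maintenance is needed without an MMU or ASIDs */
    }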
-
Submitted by Will Deacon
cpu_switch_mm is a logical nop on nommu systems, so define it as such when !CONFIG_MMU.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Will Deacon
nommu platforms do not perform address translation and therefore clearly don't have TLBs. However, some SMP code assumes the presence of the TLB flushing routines and will therefore fail to compile for a nommu system. This patch defines dummy local_* TLB operations and #defines tlb_ops_need_broadcast() as 0, therefore causing the usual ARM SMP TLB operations to call the local variants instead.

Signed-off-by: Will Deacon <will.deacon@arm.com>
CC: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
CC: Nicolas Pitre <nico@linaro.org>
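A hedged sketch of the !CONFIG_MMU direction; the names follow the usual local_* TLB API, but the exact set of dummies stubbed out by the patch is an assumption.

    #define tlb_ops_need_broadcast()    0    /* SMP helpers fall through to the local ops */

    static inline void local_flush_tlb_all(void) { }
    static inline void local_flush_tlb_mm(struct mm_struct *mm) { }
    static inline void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr) { }
    static inline void local_flush_tlb_kernel_page(unsigned long kaddr) { }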
-
- 07 June 2013, 1 commit
-
-
Submitted by Mark Rutland
Switching between reading the virtual or physical counters is problematic, as some core code wants a view of time before we're fully set up. Using a function pointer and switching the source after the first read can make time appear to go backwards, and having a check in the read function is an unfortunate block on what we want to be a fast path. Instead, this patch makes us always use the virtual counters. If we're a guest, or don't have hyp mode, we'll use the virtual timers, and as such don't care about CNTVOFF as long as it doesn't change in such a way as to make time appear to travel backwards. As the guest will use the virtual timers, a (potential) KVM host must use the physical timers (which can wake up the host even if they fire while a guest is executing), and hence a host must have CNTVOFF set to zero so as to have a consistent view of time between the physical timers and virtual counters.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Rob Herring <rob.herring@calxeda.com>
-
- 06 June 2013, 2 commits
-
-
Submitted by Peter Zijlstra
Since the introduction of preemptible mmu_gather, TLB fast mode has been broken. TLB fast mode relies on there being absolutely no concurrency; it frees pages first and invalidates TLBs later. However, now we can get concurrency and stuff goes *bang*. This patch removes all tlb_fast_mode() code; this was found to be the better option versus trying to patch the hole by entangling TLB invalidation with the scheduler.

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Tony Luck <tony.luck@intel.com>
Reported-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Will Deacon
__my_cpu_offset is non-volatile, since we want its value to be cached when we access several per-cpu variables in a row with preemption disabled. This means that we rely on preempt_{en,dis}able to hazard with the operation via the barrier() macro, so that we can't end up migrating CPUs without reloading the per-cpu offset. Unfortunately, GCC doesn't treat a "memory" clobber on a non-volatile asm block as a side-effect, and will happily re-order it before other memory clobbers (including those in preempt_disable()) and cache the value. This has been observed to break the cmpxchg logic in the slub allocator, leading to livelock in kmem_cache_alloc in mainline kernels. This patch adds a dummy memory input operand to __my_cpu_offset, forcing it to be ordered with respect to the barrier() macro.

Cc: <stable@vger.kernel.org>
Cc: Rob Herring <rob.herring@calxeda.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
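A sketch close to the shape of the fix described, with the details hedged: the per-cpu offset is assumed to live in TPIDRPRW, and a dummy "Q" memory input ties the (deliberately non-volatile) asm to the compiler barrier in preempt_{en,dis}able().

    static inline unsigned long __my_cpu_offset(void)
    {
        unsigned long off;
        register unsigned long *sp asm ("sp");

        /* non-volatile so the value can still be cached, but the dummy
         * input operand (*sp) stops GCC hoisting the read past barrier() */
        asm("mrc p15, 0, %0, c13, c0, 4" : "=r" (off) : "Q" (*sp));
        return off;
    }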
-
- 04 June 2013, 3 commits
-
-
Submitted by Catalin Marinas
The patch adds support for THP (transparent huge pages) to LPAE systems. When this feature is enabled, the kernel tries to map anonymous pages as 2MB sections where possible.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
[steve.capper@linaro.org: symbolic constants used, value of PMD_SECT_SPLITTING adjusted, tlbflush.h included in pgtable.h, added PROT_NONE support.]
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Catalin Marinas
This patch adds support for hugetlbfs based on the x86 implementation. It allows mapping of 2MB sections (see Documentation/vm/hugetlbpage.txt for usage). The 64K pages configuration is not supported (the section size is 512MB in this case).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
[steve.capper@linaro.org: symbolic constants replace numbers in places. Split up into multiple files, to simplify future non-LPAE support, removed huge_pmd_share code, as this is very rarely executed, Added PROT_NONE support].
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Steve Capper
For 3 levels of paging, the PTE_EXT_NG bit will be set for user address ptes that are written to a page table but not for ptes created with mk_pte. This can cause some comparison tests made by pte_same to fail spuriously and lead to other problems. To correct this behaviour, we mask off PTE_EXT_NG for any pte that is present before running the comparison.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
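A hedged sketch of the masking described; the helper name is illustrative and the real kernel definition may differ in detail.

    static inline int pte_same_sketch(pte_t a, pte_t b)
    {
        pteval_t lhs = pte_val(a);
        pteval_t rhs = pte_val(b);

        /* PTE_EXT_NG only appears once the pte has been written to a table,
         * so ignore it when comparing present ptes */
        if (pte_present(a))
            lhs &= ~PTE_EXT_NG;
        if (pte_present(b))
            rhs &= ~PTE_EXT_NG;

        return lhs == rhs;
    }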
-
- 30 May 2013, 7 commits
-
-
Submitted by Will Deacon
For 2-level page tables, PTE_HWTABLE_PTRS describes the offset between Linux PTEs and hardware PTEs. On LPAE, there is no distinction (since we have 64-bit descriptors with plenty of space), so PTE_HWTABLE_PTRS should be 0. Unfortunately, it is wrongly defined as PTRS_PER_PTE, meaning that current pte table flushing is off by a page. Luckily, all current LPAE implementations are SMP, so the hardware walker can snoop L1. This patch fixes the broken definition.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Cyril Chemparathy
On LPAE machines, PHYS_OFFSET evaluates to a phys_addr_t and this type is inherited by the PHYS_PFN_OFFSET definition as well. Consequently, the kernel build emits warnings of the form:

    init/main.c: In function 'start_kernel':
    init/main.c:588:7: warning: format '%lx' expects argument of type 'long unsigned int', but argument 2 has type 'phys_addr_t' [-Wformat]

This patch fixes this warning by pinning down the PFN type to unsigned long.

Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
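A minimal sketch of the fix's shape, assuming the usual PHYS_OFFSET/PAGE_SHIFT definitions: the PFN constant is forced back to unsigned long even when PHYS_OFFSET is a 64-bit phys_addr_t.

    #define PHYS_PFN_OFFSET    ((unsigned long)(PHYS_OFFSET >> PAGE_SHIFT))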
-
Submitted by Cyril Chemparathy
This patch redefines the early boot time use of the R4 register to steal a few low order bits (ARCH_PGD_SHIFT bits) on LPAE systems. This allows for up to 38-bit physical addresses.

Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Cyril Chemparathy
This patch moves the TTBR1 offset calculation and the T1SZ calculation out of the TTB setup assembly code. This should not affect functionality in any way, but improves code readability as well as readability of subsequent patches in this series.

Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Cyril Chemparathy
This patch adds TTBR accessor macros, and modifies cpu_get_pgd() and the LPAE version of cpu_set_reserved_ttbr0() to use these instead. In the process, we also fix these functions to correctly handle cases where the physical address lies beyond the 4G limit of 32-bit addressing.

Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Cyril Chemparathy
This patch modifies the switch_mm() processor functions to use phys_addr_t. On LPAE systems, we now honor the upper 32 bits of the physical address that is being passed in, and program these into TTBR as expected.

Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
[will: fixed up conflict in 3-level switch_mm with big-endian changes]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Cyril Chemparathy
This patch applies to PAGE_MASK, PMD_MASK, and PGDIR_MASK, where forcing unsigned long math truncates the mask at 32 bits. This clearly does bad things on PAE systems. This patch fixes the problem by defining these masks as signed quantities. We then rely on sign extension to do the right thing.

Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
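A hedged illustration of the sign-extension trick, assuming 4K pages; the macro names below are comparative, not the kernel's. Built from a plain int, the mask sign-extends when promoted to a 64-bit phys_addr_t, so the high bits stay set.

    #define PAGE_MASK_SIGNED      (~((1 << PAGE_SHIFT) - 1))    /* int -4096: promotes to 0xfffffffffffff000 */
    #define PAGE_MASK_UNSIGNED    (~(PAGE_SIZE - 1))            /* 32-bit unsigned long: promotes to 0x00000000fffff000 */

    /* with a 36-bit physical address 0x123456789:
     *   addr & PAGE_MASK_SIGNED   == 0x123456000   (correct)
     *   addr & PAGE_MASK_UNSIGNED == 0x023456000   (top bits lost)
     */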
-
- 23 May 2013, 1 commit
-
-
Submitted by Laura Abbott
Several of the ioremap functions use unsigned long in places, resulting in truncation if physical addresses greater than 4G are passed in. Change the types of the functions and the callers accordingly.

Cc: Krzysztof Halasa <khc@pm.waw.pl>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 21 May 2013, 2 commits
-
-
Submitted by Jon Medhurst
Add a new 'smp_init' hook to machine_desc so platforms can specify a function to be used to set up smp ops instead of having a statically defined value. The hook must return true when smp_ops are initialized; if false, the static mdesc->smp_ops will be used by default. Add the definition of "bool" to asm/mach/arch.h by including linux/types.h, making the header self-contained.

Signed-off-by: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
-
Submitted by Stefano Stabellini
Rename virt_smp_ops to psci_smp_ops and move them to arch/arm/kernel/psci_smp.c. Remove mach-virt/platsmp.c, now unused. Compile psci_smp if CONFIG_ARM_PSCI and CONFIG_SMP. Add a cpu_die smp_op based on psci_ops.cpu_off. Initialize PSCI before setting smp_ops in setup_arch. If PSCI is available on the platform, prefer psci_smp_ops over the platform smp_ops.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Will Deacon <will.deacon@arm.com>
CC: arnd@arndb.de
CC: marc.zyngier@arm.com
CC: linux@arm.linux.org.uk
CC: nico@linaro.org
CC: rob.herring@calxeda.com
-
- 17 May 2013, 1 commit
-
-
Submitted by Uwe Kleine-König
On v7-M the extended cpuid registers are not available from CP15, but they are memory mapped in the System Control Space. There isn't an equivalent available for CPUID_{CACHETYPE,TCM,TLBTYPE,MPIDR}.

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
-
- 16 May 2013, 1 commit
-
-
Submitted by Arnd Bergmann
In OABI configurations, some uses of the do_div function cause gcc to run out of registers. To work around that, we can force the use of the out-of-line version for configurations that build an OABI kernel. Without this patch, building netx_defconfig results in:

    net/core/pktgen.c: In function 'pktgen_if_show':
    net/core/pktgen.c:682:2775: error: can't find a register in class 'GENERAL_REGS' while reloading 'asm'
    net/core/pktgen.c:682:3153: error: can't find a register in class 'GENERAL_REGS' while reloading 'asm'
    net/core/pktgen.c:682:2775: error: 'asm' operand has impossible constraints
    net/core/pktgen.c:682:3153: error: 'asm' operand has impossible constraints

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 14 May 2013, 1 commit
-
-
Submitted by Jaccon Bastiaansen
The implementation of cmpxchg64() for the ARMv6 and v7 architecture casts parameters 2 and 3 (the old and new 64-bit values) to an unsigned long before calling the atomic_cmpxchg64() function. This clears the top 32 bits of the old and new values, resulting in the wrong values being compare-exchanged. Luckily, this only appears to be used for 64-bit sched_clock, which we don't (yet) have on ARM. This bug was introduced by commit 3e0f5a15 ("ARM: 7404/1: cmpxchg64: use atomic64 and local64 routines for cmpxchg64").

Cc: <stable@vger.kernel.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jaccon Bastiaansen <jaccon.bastiaansen@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
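A hedged sketch of the bug's shape only; the macro names are illustrative and atomic64_cmpxchg() stands in for the 64-bit compare-exchange helper the commit refers to.

    /* broken: the (unsigned long) casts drop the top 32 bits on a 32-bit kernel */
    #define cmpxchg64_broken(ptr, o, n) \
        atomic64_cmpxchg((atomic64_t *)(ptr), (unsigned long)(o), (unsigned long)(n))

    /* fixed: keep the operands at their full 64-bit width */
    #define cmpxchg64_fixed(ptr, o, n) \
        atomic64_cmpxchg((atomic64_t *)(ptr), (unsigned long long)(o), (unsigned long long)(n))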
-
- 30 April 2013, 1 commit
-
-
Submitted by Catalin Marinas
ARM processors with LPAE enabled use 3 levels of page tables, with an entry in the top level (pgd) covering 1GB of virtual space. Because of the branch relocation limitations on ARM, the loadable modules are mapped 16MB below PAGE_OFFSET, making the corresponding 1GB pgd shared between kernel modules and user space. If free_pgtables() is called with the default ceiling 0, free_pgd_range() (and subsequently called functions) also frees the page table shared between user space and kernel modules (which is normally handled by the ARM-specific pgd_free() function). This patch defines the ARM USER_PGTABLES_CEILING as TASK_SIZE when CONFIG_ARM_LPAE is enabled. Note that the pgd_free() function already checks the presence of the shared pmd page allocated by pgd_alloc() and frees it, though with ceiling 0 this wasn't necessary.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Hugh Dickins <hughd@google.com>
Cc: <stable@vger.kernel.org> [3.3+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
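A minimal sketch of the define described; its exact placement in the headers is an assumption.

    #ifdef CONFIG_ARM_LPAE
    #define USER_PGTABLES_CEILING    TASK_SIZE    /* stop free_pgd_range() before the pgd shared with modules */
    #endif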
-
- 29 April 2013, 3 commits
-
-
Submitted by Marc Zyngier
We use the vfp_host pointer to store the host VFP context, should the guest start using VFP itself. Actually, we can use this pointer in a more generic way to store CPU-specific data, and arm64 is using it to dump the whole host state before switching to the guest. Simply rename the vfp_host field to host_cpu_context, and the corresponding type to kvm_cpu_context_t. No change in functionality.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
-
Submitted by Marc Zyngier
Most of the capabilities are common to both arm and arm64, but we still need to handle the exceptions. Introduce kvm_arch_dev_ioctl_check_extension, which both architectures implement (in the 32-bit case, it just returns 0).

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
-
Submitted by Marc Zyngier
Now that we have the necessary infrastructure to boot a hotplugged CPU at any point in time, wire a CPU notifier that will perform the HYP init for the incoming CPU. Note that this depends on the platform code and/or firmware to boot the incoming CPU with HYP mode enabled and return to the kernel by following the normal boot path (HYP stub installed).

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
-