- 27 May 2011, 1 commit

By Russell King
pmd_off() has only one user, so let's consolidate it into its only user.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 26 May 2011, 6 commits

By Will Deacon
Now that ASID 0 is no longer used as a reserved value, allow it to be allocated to tasks.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
By Will Deacon
On ARMv7 CPUs that cache first level page table entries (like the Cortex-A15), using a reserved ASID while changing the TTBR or flushing the TLB is unsafe. This is because the CPU may cache the first level entry as the result of a speculative memory access while the reserved ASID is assigned. After the process owning the page tables dies, the memory will be reallocated and may be written with junk values which can be interpreted as global, valid PTEs by the processor. This will result in the TLB being populated with bogus global entries.

This patch avoids the use of a reserved context ID in the v7 switch_mm and ASID rollover code by temporarily using the swapper_pg_dir pointed at by TTBR1, which contains only global entries that are not tagged with ASIDs.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
By Catalin Marinas
This patch makes TTBR1 point to swapper_pg_dir so that global, kernel mappings can be used exclusively on v6 and v7 cores where they are needed.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
By Will Deacon
The v6 and v7 implementations of flush_kern_dcache_area do not align the passed MVA to the size of a cacheline in the data cache. If a misaligned address is used, only a subset of the requested area will be flushed. This has been observed to cause failures in SMP boot where the secondary_data initialised by the primary CPU is not cacheline aligned, causing the secondary CPUs to read incorrect values for their pgd and stack pointers.

This patch ensures that the base address is cacheline aligned before flushing the d-cache.

Cc: <stable@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
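The alignment amounts to masking off the low address bits; a minimal C sketch assuming 32-byte data cache lines (the in-tree fix itself lives in assembly):

    #include <stdint.h>

    #define D_CACHE_LINE_SIZE 32    /* assumption: 32-byte lines, common on v6/v7 cores */

    /* Align the base address down to a cache line boundary so that a
     * line-by-line flush of [addr, addr + size) misses no bytes. */
    static uintptr_t cacheline_align(uintptr_t addr)
    {
            return addr & ~((uintptr_t)D_CACHE_LINE_SIZE - 1);
    }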
By Will Deacon
sanity_check_meminfo walks over the registered memory banks and attempts to split banks across lowmem and highmem when they would otherwise overlap with the vmalloc space.

When SPARSEMEM is used, two problems occur when the virtual address of the start of a bank is equal to vmalloc_min:

1. The end of lowmem is calculated as __pa(vmalloc_min - 1) + 1. In the above scenario, this gives the end address of the previous bank, rather than the bank we are actually interested in. This value is later used as the memblock limit and artificially restricts the total amount of available memory.

2. The checks to determine whether a bank belongs to highmem only test if __va(bank->start) is greater or less than vmalloc_min. In the case that it is equal, the bank is incorrectly treated as lowmem, which hoses the vmalloc area.

This patch fixes both problems by checking whether the virtual start address of a bank is >= vmalloc_min and then calculating lowmem_end by finding the virtual end address of the highest lowmem bank.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
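A minimal sketch of the corrected boundary test, with illustrative names (the in-tree code walks struct membank entries):

    /* A bank whose virtual start equals vmalloc_min must be treated as
     * highmem, so the comparison is '>=' rather than '>'. */
    static int bank_is_highmem(unsigned long virt_start, unsigned long vmalloc_min)
    {
            return virt_start >= vmalloc_min;
    }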
By Will Deacon
In commit eb33575c ("[ARM] Double check memmap is actually valid with a memmap has unexpected holes V2"), a new function, memmap_valid_within, was introduced to mmzone.h so that holes in the memmap which pass pfn_valid in SPARSEMEM configurations can be detected and avoided.

The fix to this problem checks that the pfn <-> page linkages are correct by calculating the page for the pfn and then checking that page_to_pfn on that page returns the original pfn. Unfortunately, in SPARSEMEM configurations, this results in reading from the page flags to determine the correct section. Since the memmap here has been freed, junk is read from memory and the check is no longer robust. In the best case, reading from /proc/pagetypeinfo will give you the wrong answer. In the worst case, you get SEGVs, Kernel OOPses and hung CPUs. Furthermore, ioremap implementations that use pfn_valid to disallow the remapping of normal memory will break.

This patch allows architectures to provide their own pfn_valid function instead of using the default implementation used by sparsemem. The architecture-specific version is aware of the memmap state and will return false when passed a pfn for a freed page within a valid section.

Acked-by: Mel Gorman <mgorman@suse.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
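A hedged sketch of what such an override can look like; the config guard name here is illustrative, and the body assumes memblock knows the registered memory banks:

    #include <linux/memblock.h>
    #include <linux/mm.h>

    /* Only pfns backed by actual memory are valid, so a freed memmap
     * hole inside an otherwise present section returns false. */
    #ifdef CONFIG_ARCH_HAS_PFN_VALID   /* illustrative guard name */
    int pfn_valid(unsigned long pfn)
    {
            return memblock_is_memory((phys_addr_t)pfn << PAGE_SHIFT);
    }
    #endif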
- 25 May 2011, 2 commits

By Peter Zijlstra
Fold all the mmu_gather rework patches into one for submission.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Reported-by: Hugh Dickins <hughd@google.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: David Miller <davem@davemloft.net>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Tony Luck <tony.luck@intel.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Namhyung Kim <namhyung@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
By David Rientjes
Architectures that implement their own show_mem() function did not pass the filter argument to show_free_areas() to appropriately avoid emitting the state of nodes that are disallowed in the current context. This patch now passes the filter argument to show_free_areas() so those nodes are now avoided.

This patch also removes the show_free_areas() wrapper around __show_free_areas() and converts existing callers to pass an empty filter.

ia64 emits additional information for each node, so skip_free_areas_zone() must be made global to filter disallowed nodes and it is converted to use a nid argument rather than a zone for this use case.

Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Helge Deller <deller@gmx.de>
Cc: James Bottomley <jejb@parisc-linux.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
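Illustratively, the shape of the change in an arch-specific implementation, as a sketch rather than any one architecture's exact code:

    /* Generic helper; after this rework it takes the filter directly. */
    void show_free_areas(unsigned int filter);

    /* Arch-specific show_mem() now accepts and forwards the filter so
     * nodes disallowed in the current context can be suppressed. */
    void show_mem(unsigned int filter)
    {
            /* ... arch-specific memory statistics ... */
            show_free_areas(filter);
    }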
- 23 May 2011, 1 commit

By Grant Likely
If a dtb is passed to the kernel then the kernel needs to iterate through compiled-in mdescs looking for one that matches, and move the dtb data to a safe location before it gets accidentally overwritten by the kernel.

This patch creates a new function, setup_machine_fdt(), which is analogous to the setup_machine_atags() created in the previous patch. It does all the early setup needed to use a device tree machine description.

v5: - Print a warning when neither dtb nor atags are passed to the kernel
    - Fix bug in setting of __machine_arch_type to the selected machine, not just the last machine in the list. Reported-by: Tixy <tixy@yxit.co.uk>
    - Copy command line directly into boot_command_line instead of cmd_line
v4: - Dump some output when a matching machine_desc cannot be found
v3: - Added processing of reserved list.
    - Backed out the v2 change that copied instead of reserved the dtb. The dtb is reserved again and the real problem was fixed by using alloc_bootmem_align() for early allocation of RAM for unflattening the tree.
    - Moved cmd_line and initrd changes to an earlier patch to make the series bisectable.
v2: - Changed to save the dtb by copying into an allocated buffer. Since the dtb will very likely be passed in the first 16k of RAM where the interrupt vectors live, memblock_reserve() is insufficient to protect the dtb data.

[based on work originally written by Jeremy Kerr <jeremy.kerr@canonical.com>]
Tested-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
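A simplified, self-contained sketch of the matching walk, with illustrative types and names (the real code compares the dtb root's compatible strings against each mdesc's dt_compat list):

    #include <string.h>

    struct machine_desc {
            const char *name;
            const char *const *dt_compat;   /* NULL-terminated compatible list */
    };

    /* Return the first compiled-in machine_desc whose dt_compat list
     * matches the dtb root's compatible string, or NULL if none do. */
    static const struct machine_desc *
    match_mdesc(const struct machine_desc *descs, int count, const char *compat)
    {
            for (int i = 0; i < count; i++) {
                    const char *const *c;
                    for (c = descs[i].dt_compat; c && *c; c++)
                            if (strcmp(*c, compat) == 0)
                                    return &descs[i];
            }
            return NULL;
    }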
- 21 May 2011, 1 commit

By Saeed Bishara
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Saeed Bishara <saeed@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 16 May 2011, 1 commit

By Saeed Bishara
When cache_is_vipt_nonaliasing(), pte_exec() is always true at the end of this function, so there is no need for the additional check.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Saeed Bishara <saeed@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 12 May 2011, 2 commits

By Will Deacon
The SPARSEMEM code allocates memmap entries only for sections which are present (i.e. those which contain some valid memory). The membank checks in free_unused_memmap do not take this into account and can incorrectly attempt to free memory which is not allocated, resulting in a BUG() in the bootmem code.

However, if memory is configured as follows:

    |<----section---->|<----hole---->|<----section---->|
    +--------+--------+--------------+--------+--------+
    | bank 0 | unused |              | bank 1 | unused |
    +--------+--------+--------------+--------+--------+

where a bank only occupies part of a section, the memmap allocated for the remainder of the section *can* be freed.

This patch modifies the checks in free_unused_memmap so that only valid memmap entries are considered for removal.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
By Russell King
Rather than each platform providing its own function to adjust the zone sizes, use the new ARM_DMA_ZONE_SIZE definition to perform this adjustment. This ensures that the actual DMA zone size and the ISA_DMA_THRESHOLD/MAX_DMA_ADDRESS definitions are consistent with each other, and moves this complexity out of the platform code.

Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 11 May 2011, 1 commit

By Grant Likely
Add some basic empty infrastructure for DT support on ARM.

v5: - Fix off-by-one error in size calculation of initrd
    - Stop mucking with cmd_line, and load the command line from the dt into boot_command_line instead, which matches the behaviour of ATAGS booting
v3: - Moved cmd_line export and initrd setup to this patch to make the series bisectable.
    - Switched to alloc_bootmem_align() for allocation when unflattening the device tree. memblock_alloc() was not the right interface.

Signed-off-by: Jeremy Kerr <jeremy.kerr@canonical.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
- 04 May 2011, 1 commit

By Nicolas Pitre
The Marvell PJ4 is ARMv7 capable, so we don't support it in ARMv6 mode anymore.

Signed-off-by: Nicolas Pitre <nico@fluxnic.net>
Acked-by: Saeed Bishara <saeed.bishara@gmail.com>
Acked-by: Haojian Zhuang <haojian.zhuang@gmail.com>
- 28 April 2011, 1 commit

By Ben Hutchings
gas used to accept (and ignore?) .size directives which referred to undefined symbols, as this does. In binutils 2.21 these are treated as fatal errors.

Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
- 14 April 2011, 1 commit

By Nicolas Pitre
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 02 April 2011, 1 commit

By Russell King
CONFIG_PM is now set whenever runtime PM is supported, in addition to suspend and hibernate. This causes build errors when runtime PM is enabled on a platform but the CPU does not have the appropriate support for suspend.

So, switch this code to use CONFIG_PM_SLEEP rather than CONFIG_PM to allow runtime PM to be enabled without causing build errors.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 31 March 2011, 1 commit

By Lucas De Marchi
Fixes generated by 'codespell' and manually reviewed.

Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi>
- 25 March 2011, 1 commit

By David Rientjes
Commit ddd588b5 ("oom: suppress nodes that are not allowed from meminfo on oom kill") moved lib/show_mem.o out of lib/lib.a, which resulted in build warnings on all architectures that implement their own versions of show_mem():

    lib/lib.a(show_mem.o): In function `show_mem':
    show_mem.c:(.text+0x1f4): multiple definition of `show_mem'
    arch/sparc/mm/built-in.o:(.text+0xd70): first defined here

The fix is to remove __show_mem() and add its argument to show_mem() in all implementations to prevent this breakage.

Architectures that implement their own show_mem() actually don't do anything with the argument yet, but they could be made to filter nodes that aren't allowed in the current context in the future just like the generic implementation.

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Reported-by: James Bottomley <James.Bottomley@hansenpartnership.com>
Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
- 10 March 2011, 1 commit

By Will Deacon
On the r2p* and r3p* versions of the Cortex-A9, a speculative memory access may cause a page table walk which starts prior to an ASID switch but completes afterwards. This can populate the micro-TLB with a stale entry which may be hit with the new ASID.

This workaround places two dsb instructions in the mm switching code so that no page table walks can cross the ASID switch.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
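Conceptually, the workaround brackets the CONTEXTIDR (ASID) write with barriers; a hedged inline-assembly sketch, ARM-only and simplified relative to the real switch_mm code:

    /* No page table walk may straddle the ASID switch, so drain walks
     * before and after the CONTEXTIDR write. Illustrative only. */
    static inline void set_asid(unsigned int asid)
    {
            asm volatile("dsb" ::: "memory");          /* finish prior walks */
            asm volatile("mcr p15, 0, %0, c13, c0, 1"  /* write CONTEXTIDR */
                         : : "r" (asid));
            asm volatile("dsb" ::: "memory");          /* order against new walks */
    }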
- 09 March 2011, 1 commit

By Santosh Shilimkar
PL310 implements the Clean & Invalidate by Way L2 cache maintenance operation (offset 0x7FC). This operation runs in background so that PL310 can handle normal accesses while it is in progress. Under very rare circumstances, due to this erratum, write data can be lost when PL310 treats a cacheable write transaction during a Clean & Invalidate by Way operation.

Workaround:
- Disable Write-Back and Cache Linefill (Debug Control Register)
- Clean & Invalidate by Way (0x7FC)
- Re-enable Write-Back and Cache Linefill (Debug Control Register)

This patch also removes any OMAP dependency on PL310 errata.

Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
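A hedged C sketch of that sequence; the debug-control offset (0xF40) and helper names are assumptions based on the standard PL310 register layout:

    #include <stdint.h>

    #define L2X0_CLEAN_INV_WAY  0x7FC
    #define L2X0_DEBUG_CTRL     0xF40   /* assumption: standard PL310 map */

    static void reg_write(volatile uint8_t *base, uint32_t off, uint32_t val)
    {
            *(volatile uint32_t *)(base + off) = val;
    }

    /* Erratum workaround: disable write-back and linefill around the
     * background Clean & Invalidate by Way operation. */
    static void l2x0_clean_inv_all(volatile uint8_t *base, uint32_t way_mask)
    {
            reg_write(base, L2X0_DEBUG_CTRL, 0x03);       /* WT, no linefill */
            reg_write(base, L2X0_CLEAN_INV_WAY, way_mask);
            while (*(volatile uint32_t *)(base + L2X0_CLEAN_INV_WAY) & way_mask)
                    ;                                     /* poll until ways clear */
            reg_write(base, L2X0_DEBUG_CTRL, 0x00);       /* restore */
    }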
- 08 March 2011, 1 commit

By Richard Zhao
Move to SOC_IMX3X. Leave the ARCH_MX31/35 definitions there, in case some places prevent a multi-SoC single image.

Signed-off-by: Richard Zhao <richard.zhao@freescale.com>
Acked-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
- 24 February 2011, 3 commits

By Russell King
Move L1_CACHE_SHIFT related options together, rather than spreading them across two separate Kconfig files.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
By Nicolas Pitre
In commit e616c591, highmem support was deactivated for SMP platforms without hardware TLB ops broadcast, because usage of kmap_high_get() requires that IRQs be disabled when kmap_lock is locked, which is incompatible with the IPI mechanism used by the software TLB ops broadcast invoked through flush_all_zero_pkmaps().

The reason for kmap_high_get() is to ensure that the currently kmap'd page usage count does not decrease to zero while we're using its existing virtual mapping in an atomic context. With a VIVT cache this is essential due to cache coherency issues, but with a VIPT cache this is only an optimization so as not to pay the price of establishing a second mapping if an existing one can be used. However, on VIPT platforms without hardware TLB maintenance we can give up on that optimization in order to be able to use highmem.

From ARMv7 onwards the TLB ops are broadcasted in hardware, so let's disable ARCH_NEEDS_KMAP_HIGH_GET only when CONFIG_SMP and CONFIG_CPU_TLB_V6 are defined.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Saeed Bishara <saeed.bishara@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
By Russell King
Achieve better usage of the DMA coherent region by doing top-down allocation rather than bottom up. If we ask for a 128kB allocation, this will be aligned to 128kB and satisfied from the very bottom address. If we then ask for a 600kB allocation, this will be aligned to 1MB, and we will have a 896kB hole.

Performing top-down allocation resolves this by allocating the 128kB at the very top, and then the 600kB can come in below it without any unnecessary wastage.

This problem was reported by Janusz Krzysztofik, who had 2 x 128kB + 1 x 640kB allocations which wouldn't fit into 1MB.

Tested-by: Janusz Krzysztofik <jkrzyszt@tis.icnet.pl>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 23 February 2011, 1 commit

By Russell King
This adds core support for saving and restoring CPU coprocessor registers for suspend/resume support. This contains support for suspend with ARM920, ARM926, SA11x0, PXA25x, PXA27x, PXA3xx, V6 and V7 CPUs. Tested on Assabet and Tegra 2.

Tested-by: Colin Cross <ccross@android.com>
Tested-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 22 February 2011, 3 commits

By Kukjin Kim
This patch changes the Kconfig and Makefile for the new ARCH_EXYNOS4. It also updates arch/arm/Kconfig, Makefile and arch/arm/mm/Kconfig to include support for the new ARCH_EXYNOS4.

Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
By Russell King
Add pud_offset() et al. between the pgd and pmd code in preparation for using pgtable-nopud.h rather than 4level-fixup.h.

This incorporates a fix from Jamie Iles <jamie@jamieiles.com> for uaccess_with_memcpy.c.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
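With pgtable-nopud.h the pud level folds into the pgd, so the new accessors are trivial; an illustrative sketch with stand-in types:

    typedef struct { unsigned long pgd; } pgd_t;   /* stand-in, for illustration */
    typedef pgd_t pud_t;                           /* pud folded into pgd */

    /* With a folded pud, pud_offset() just passes the pgd through; the
     * address contributes no extra index bits at this level. */
    static inline pud_t *pud_offset(pgd_t *pgd, unsigned long addr)
    {
            (void)addr;
            return pgd;
    }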
By Russell King
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 19 February 2011, 2 commits

By Will Deacon
On versions of the Cortex-A9 prior to r3p0, an interrupted ICIALLUIS operation may prevent the completion of a following broadcasted operation if the second operation is received by a CPU before the ICIALLUIS has completed, potentially leading to corrupted entries in the cache or TLB.

This workaround sets a bit in the diagnostic register of the Cortex-A9, causing CP15 maintenance operations to be uninterruptible.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
By Srinidhi Kasagar
The effect of the cache sync operation is to drain the store buffer and wait for all internal buffers to be empty. In normal conditions, the store buffer is able to merge normal memory writes within its 32-byte data buffers. Due to this erratum, present in r3p0, the effect of the cache sync operation on the store buffer still remains when the operation completes. This means that the store buffer is always asked to drain, which prevents it from merging any further writes. This can severely affect performance on write traffic, especially on Normal Non-cacheable memory.

The proposed workaround is to replace the normal offset of the cache sync operation (0x730) with another offset targeting an unmapped PL310 register, 0x740.

Signed-off-by: srinidhi kasagar <srinidhi.kasagar@stericsson.com>
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
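A hedged sketch of the selection; the helper name is illustrative, with 0x730 as the documented cache-sync offset and 0x740 the unmapped location:

    #include <stdint.h>

    #define L2X0_CACHE_SYNC  0x730
    #define L2X0_DUMMY_REG   0x740   /* unmapped on PL310: write is harmless */

    /* On r3p0, syncing through the dummy offset drains the store buffer
     * without leaving it permanently in the drained, non-merging state. */
    static void l2x0_cache_sync(volatile uint8_t *base, int is_r3p0)
    {
            uint32_t off = is_r3p0 ? L2X0_DUMMY_REG : L2X0_CACHE_SYNC;
            *(volatile uint32_t *)(base + off) = 0;
    }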
- 15 February 2011, 3 commits

By Will Deacon
The unsigned long datatype is not sufficient for mapping physical addresses >= 4GB. This patch ensures that the phys_addr_t datatype is used to represent physical addresses when converting from a PFN.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
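The essence of the fix, sketched with stand-in definitions: widen to phys_addr_t before shifting so pfns for memory at or above 4GB do not overflow a 32-bit unsigned long:

    #include <stdint.h>

    typedef uint64_t phys_addr_t;   /* 64-bit when large physical addressing is enabled */
    #define PAGE_SHIFT 12

    /* Wrong: (pfn << PAGE_SHIFT) truncates in 32-bit unsigned long
     * arithmetic. Right: cast to the wider type first. */
    #define __pfn_to_phys(pfn)  ((phys_addr_t)(pfn) << PAGE_SHIFT)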
By Will Deacon
For the kernel to support 2-level and 3-level page tables, physical addresses (and also page table entries) need to be 32 or 64 bits depending upon the configuration. This patch uses the %08llx conversion specifier for physical addresses and page table entries, ensuring that they are cast to (long long) so that common code can be used regardless of the datatype widths.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
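For example, a small user-space sketch of the formatting idiom:

    #include <stdio.h>
    #include <stdint.h>

    typedef uint64_t phys_addr_t;   /* could equally be 32-bit in a 2-level config */

    int main(void)
    {
            phys_addr_t phys = 0x100000000ULL;      /* an address above 4GB */
            /* Cast to (long long) so the same %08llx format works
             * whether phys_addr_t is 32 or 64 bits wide. */
            printf("mapping at %08llx\n", (long long)phys);
            return 0;
    }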
By Catalin Marinas
With LPAE we no longer have software bits in a separate Linux PTE and the early_pte_alloc() function should pass PTE_HWTABLE_OFF + PTE_HWTABLE_SIZE to early_alloc() to avoid allocating extra memory.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 10 February 2011, 2 commits

By Russell King
SWP emulation requires that CPU domain support is disabled in order to work safely. Make that explicit in the kernel configuration to prevent illegal configurations being generated.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
By Russell King
OMAP2 (armv6) and MX3 turn off support for the V6K instructions; when such kernels also include SMP support, the resulting kernel is unsafe on SMP and can result in corrupted filesystems, as we end up using unsafe bitops.

Re-enable the use of V6K instructions on such kernels, and let such kernels running on V6 CPUs eat undefined instruction faults, which will be much safer than filesystem corruption. Next merge window we can fix this properly (as it requires a much bigger set of changes).

Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 03 February 2011, 2 commits

By Russell King
Limit DMA_CACHE_RWFO to only v6k SMP CPUs - V6 CPUs aren't SMP capable, so the read/write for ownership work-around doesn't apply to them.

Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
By Russell King
Now that we build a v6+v6k+v7 kernel with -march=armv6k for everything, we don't need to disable swp emulation to work around the build problem with OMAP.

Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>