- 23 Feb 2014, 3 commits
-
By Andrew Lunn

Kirkwood, which uses the Feroceon L2 cache controller, will soon be moving into mach-mvebu. Allow the cache controller to be built in this situation.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
-
By Andrew Lunn

Instantiate the L2 cache from DT. Indicate in DT where the cache control register is, so that it is possible to enable/disable write-through on the CPU.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Tested-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
-
By Andrew Lunn

With the gradual move to DT, kirkwood has become a lot less dependent on plat-orion. cache-feroceon-l2.h is the last dependency. Move it out so we can drop plat-orion when building DT-only kirkwood boards.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
-
- 28 Jan 2014, 1 commit
-
By Ben Peddell

Commit 65939301 ("arm: set initrd_start/initrd_end for fdt scan") caused the FDT initrd_start and initrd_end to override the phys_initrd_start and phys_initrd_size set by the initrd= kernel parameter. With this patch, initrd_start and initrd_end will be overridden if phys_initrd_start and phys_initrd_size are set by the kernel initrd= parameter.

Fixes: 65939301 ("arm: set initrd_start/initrd_end for fdt scan")
Signed-off-by: Ben Peddell <klightspeed@killerwolves.net>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
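A sketch of the resulting precedence logic in arm_memblock_init() (assuming, as the message describes, that the FDT scan populates initrd_start/initrd_end first, while the initrd= parameter fills in the phys_* variables):

    /* sketch, arch/arm/mm/init.c: only fall back to the FDT-provided
     * initrd location when initrd= did not already set one */
    if (initrd_start && !phys_initrd_size) {
        phys_initrd_start = __virt_to_phys(initrd_start);
        phys_initrd_size = initrd_end - initrd_start;
    }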
-
- 22 Jan 2014, 2 commits
-
By Santosh Shilimkar

Switch to memblock interfaces for the early memory allocator instead of the bootmem allocator. No functional change in behavior from the bootmem users' point of view. Archs already converted to NO_BOOTMEM now directly use memblock interfaces instead of bootmem wrappers built on top of memblock, and on the archs which still use bootmem, these new APIs just fall back to the existing bootmem APIs.

Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Grygorii Strashko <grygorii.strashko@ti.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Paul Walmsley <paul@pwsan.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Tejun Heo <tj@kernel.org>
Cc: Tony Lindgren <tony@atomide.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
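As a rough before/after at a call site (illustrative only; exact call sites vary across the series), an early allocation moves from a bootmem wrapper to the memblock-backed interface, which itself falls back to bootmem on architectures not yet converted:

    #include <linux/bootmem.h>

    /* before: bootmem wrapper */
    ptr = alloc_bootmem_align(size, align);

    /* after: memblock interface; falls back to bootmem on non-NO_BOOTMEM archs */
    ptr = memblock_virt_alloc(size, align);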
-
By Mel Gorman

Commit 4b59e6c4 ("mm, show_mem: suppress page counts in non-blockable contexts") introduced SHOW_MEM_FILTER_PAGE_COUNT to suppress PFN walks on large memory machines. Commit c78e9363 ("mm: do not walk all of system memory during show_mem") avoided a PFN walk in the generic show_mem helper, which removes the requirement for SHOW_MEM_FILTER_PAGE_COUNT in that case. This patch removes PFN walkers from the arch-specific implementations that report on a per-node or per-zone granularity. ARM and unicore32 still do a PFN walk, as they report memory usage on each bank, which is a much finer granularity where the debugging information may still be of use. As the remaining arches doing PFN walks have relatively small amounts of memory, this patch simply removes SHOW_MEM_FILTER_PAGE_COUNT.

[akpm@linux-foundation.org: fix parisc]
Signed-off-by: Mel Gorman <mgorman@suse.de>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: James Bottomley <jejb@parisc-linux.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 08 Jan 2014, 1 commit
-
By Russell King

This reverts commit 787b0d5c, since it is no longer required after 7909/1 was applied, and it causes build regressions when ARM_PATCH_PHYS_VIRT is disabled and DMA_ZONE is enabled.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 29 Dec 2013, 7 commits
-
By Will Deacon

The ASID allocator has to deal with some pretty horrible behaviours by the CPU, so expand on some of the comments in there so I remember why we can never allocate ASID zero to a userspace task.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Will Deacon

Since we only clear entries in the ASID bitmap on a rollover event, the bitmap tends to consist of a block of consecutive set bits followed by a block of consecutive clear bits. The exception to this rule is for ASIDs which have been carried over from a previous generation, but these are bound by the number of CPUs. This patch optimises our bitmap searching strategy so that we search from the last successful allocation, rather than from index 1, each time we allocate a new ASID.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
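The allocation path then looks roughly like the sketch below (simplified from arch/arm/mm/context.c; cur_idx caches the position of the last successful allocation, and ASID 0 stays reserved):

    /* sketch: resume the search where the last allocation succeeded */
    asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, cur_idx);
    if (asid == NUM_USER_ASIDS) {
        /* bitmap exhausted: bump the generation, flush, restart after ASID 0 */
        generation = atomic64_add_return(ASID_FIRST_VERSION, &asid_generation);
        flush_context(cpu);
        asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);
    }
    __set_bit(asid, asid_map);
    cur_idx = asid;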
-
By Will Deacon

With the new ASID allocation algorithm, active ASIDs at the time of a rollover event will be marked as reserved, so active mm_structs can continue to operate with the same ASID as before. This in turn means that we don't need to worry about allocating a new ASID to an mm that is currently active (installed in TTBR0). Since updating the pgd and ASID is atomic on LPAE systems (by virtue of the two being fields in the same hardware register), we can dispose of the reserved TTBR0 and rely on whatever tables we currently have live.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Steven Capper

When given a compound high page, __flush_dcache_page will only flush the first page of the compound page repeatedly, rather than the entire set of constituent pages. This error was introduced by commit 0b19f933 ("ARM: mm: Add support for flushing HugeTLB pages"). This patch corrects the logic such that all constituent pages are now flushed.

Cc: stable@vger.kernel.org # 3.10+
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
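The corrected loop is roughly the sketch below (not the verbatim fix; the real __flush_dcache_page also handles aliasing VIPT caches separately):

    /* sketch: flush every constituent page of the compound page,
     * instead of flushing page 0 of the set repeatedly */
    unsigned long i;
    for (i = 0; i < (1 << compound_order(page)); i++) {
        void *addr = kmap_atomic(page + i);
        __cpuc_flush_dcache_area(addr, PAGE_SIZE);
        kunmap_atomic(addr);
    }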
-
By Russell King

Make pgd allocation retry on failure; we really need this to succeed, otherwise fork() can trigger OOMs.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
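If the retry follows the usual allocator idiom, the change amounts to passing __GFP_REPEAT on the pgd allocation in arch/arm/mm/pgd.c; a sketch under that assumption, not the verbatim diff:

    /* sketch: __GFP_REPEAT asks the page allocator to try harder before
     * failing; the classic-MMU ARM pgd is 16K, i.e. an order-2 allocation */
    pgd_t *new_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_REPEAT, 2);
    if (!new_pgd)
        return NULL;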
-
By Sebastian Hesselbarth

This adds support for the Marvell Tauros3 cache controller, which is compatible with the pl310 cache controller but broadcasts L1 cache operations to the L2 cache. While updating the binding documentation, clean up the list of possible compatibles. Also reorder the driver compatibles to allow non-ARM derivatives to be compatible with the ARM cache controller compatibles.

Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Lorenzo Pieralisi

Set-associative caches on all v7 implementations map the index bits to physical address LSBs and tag bits to MSBs. As the last level of cache on current and upcoming ARM systems grows in size, this means that, under normal DRAM controller configurations, the current v7 cache flush routine using set/way operations triggers a DRAM memory controller precharge/activate for every cache line writeback, since the routine cleans lines by first fixing the index and then looping through ways (index bits are mapped to lower physical addresses on all v7 cache implementations; this means that, with last level cache sizes in the order of MBytes, lines belonging to the same set but different ways map to different DRAM pages). Given the random content of cache tags, swapping the order of the index and way loops does not prevent DRAM page precharge and activate cycles but at least, on average, improves the chances that either multiple lines hit the same page or multiple lines belong to different DRAM banks, improving throughput significantly.

This patch swaps the inner loops in the v7 cache flushing routine to carry out the clean operations first on all sets belonging to a given way (looping through sets) and then decrementing the way. Benchmarks showed that by swapping the ordering in which sets and ways are decremented in the v7 cache flushing routine that uses set/way operations, the time required to flush caches is reduced significantly, owing to improved writeback throughput to the DRAM controller. Benchmark results vary and depend heavily on the last level cache tag RAM content when the cache is cleaned and invalidated, ranging from 2x throughput when all tag RAM entries contain dirty lines mapping to sequential pages of RAM, to 1x (ie no improvement) when all tag RAM accesses trigger a DRAM precharge/activate cycle, as the current code implies on most DRAM controller configurations.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
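Expressed as C pseudocode (the real routine is assembly in arch/arm/mm/cache-v7.S; clean_line_by_set_way() is a hypothetical stand-in for the set/way clean operation), the change is just the loop nesting:

    /* before: index (set) fixed in the outer loop, so consecutive operations
     * walk the ways and tend to land in different DRAM pages */
    for (set = nsets - 1; set >= 0; set--)
        for (way = nways - 1; way >= 0; way--)
            clean_line_by_set_way(level, way, set);

    /* after: way fixed in the outer loop; consecutive sets map to consecutive
     * physical address LSBs, so writebacks tend to stay within a DRAM page */
    for (way = nways - 1; way >= 0; way--)
        for (set = nsets - 1; set >= 0; set--)
            clean_line_by_set_way(level, way, set);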
-
- 11 Dec 2013, 5 commits
-
By Russell King

The CMA region was being marked executable:

0xdc04e000-0xdc050000       8K RW x  MEM/CACHED/WBRA
0xdc060000-0xdc100000     640K RW x  MEM/CACHED/WBRA
0xdc4f5000-0xdc500000      44K RW x  MEM/CACHED/WBRA
0xdcce9000-0xe0000000   52316K RW x  MEM/CACHED/WBRA

This is mainly due to the badly worded MT_MEMORY_DMA_READY symbol, but there are also a few other places in dma-mapping which should be corrected to use the right constant. Fix all these places:

0xdc04e000-0xdc050000       8K RW NX MEM/CACHED/WBRA
0xdc060000-0xdc100000     640K RW NX MEM/CACHED/WBRA
0xdc280000-0xdc300000     512K RW NX MEM/CACHED/WBRA
0xdc6fc000-0xe0000000   58384K RW NX MEM/CACHED/WBRA

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Laura Abbott

Other architectures define various set_memory functions to allow attributes to be changed (e.g. set_memory_x, set_memory_rw, etc.). Currently, these functions are missing on ARM. Define these in an appropriate manner for ARM.

Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
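A hedged usage sketch: these helpers conventionally take a page-aligned kernel address and a count in pages, returning 0 on success (on ARM they are declared in asm/cacheflush.h):

    unsigned long addr = __get_free_page(GFP_KERNEL);

    if (set_memory_ro(addr, 1))     /* second argument counts pages, not bytes */
        pr_warn("set_memory_ro failed\n");

    /* ... use the page read-only ... */

    set_memory_rw(addr, 1);         /* make it writable again before freeing */
    free_page(addr);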
-
By Russell King

Add basic NX support for kernel lowmem mappings. We mark any section which does not overlap kernel text as non-executable, preventing it from being used to write code and then execute directly from there. This does not change the alignment of the sections, so the kernel image doesn't grow significantly via this change, and we can do this without needing a config option.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
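A sketch of the rule as applied per section-sized block of lowmem (hypothetical variable names; the real logic sits in the lowmem mapping code in arch/arm/mm/mmu.c, and the exact text-window boundaries may differ):

    /* sketch: only sections overlapping the kernel text window stay executable */
    phys_addr_t kernel_text_start = __pa(_stext);
    phys_addr_t kernel_text_end = __pa(__init_end);

    if (end <= kernel_text_start || start >= kernel_text_end)
        prot_sect |= PMD_SECT_XN;   /* execute-never for non-text lowmem */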
-
By Russell King

Document the permissions which the various MT_MEMORY* mapping types will provide.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

This patch allows the kernel page tables to be dumped via a debugfs file, allowing kernel developers to check the layout of the kernel page tables and verify the various permissions and type settings.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 10 Dec 2013, 2 commits
-
By Santosh Shilimkar

Current code is using PHYS_OFFSET to calculate the arm_dma_limit, which will lead to wrong calculations in cases where PHYS_OFFSET is updated at runtime. So fix the code by using __pv_phys_offset instead of PHYS_OFFSET.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

Peter reports that OMAP audio broke with the recent fix for these checks, caused by OMAP audio using a 64-bit DMA mask. We should allow 64-bit DMA masks even with a 32-bit dma_addr_t, if we can be sure the amount of RAM we have won't allow the 32-bit dma_addr_t to overflow. Unfortunately, the checks to detect overflow were not correct.

Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
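A sketch of the corrected overflow test, close to but not necessarily verbatim from the helper in arch/arm/mm/dma-mapping.c:

    /* sketch: a mask wider than dma_addr_t is only a problem if the
     * highest populated page is itself beyond what dma_addr_t can reach */
    static int __dma_supported(struct device *dev, u64 mask)
    {
        unsigned long max_dma_pfn = min(max_pfn, arm_dma_pfn_limit);

        if (sizeof(mask) != sizeof(dma_addr_t) &&
            mask > (dma_addr_t)~0 &&
            dma_to_pfn(dev, ~0) < max_dma_pfn)
            return 0;   /* mask overflows dma_addr_t and RAM exceeds it */

        return 1;
    }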
-
- 30 Nov 2013, 2 commits
-
By Russell King

Commit f6f91b0d ("ARM: allow kuser helpers to be removed from the vector page") required two pages for the vectors code. Although the code setting up the initial page tables was updated, the code which allocates page tables for new processes wasn't, and neither was the code which tears down the mappings. Fix this.

Fixes: f6f91b0d ("ARM: allow kuser helpers to be removed from the vector page")
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: <stable@vger.kernel.org>
-
By Russell King

Some buses have negative offsets, which causes the DMA mask checks to falsely fail. Fix this by using the actual amount of memory fitted in the system.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 24 Nov 2013, 3 commits
-
By Santosh Shilimkar

Now that the dma_mask series is merged and max*pfn has a consistent meaning on ARM, as on the rest of the architectures, thanks to RMK's mega series, let's switch the ARM code to NO_BOOTMEM. With the NO_BOOTMEM change, we now use the memblock allocator to reserve space for the crash kernel, giving us one less dependency on the nobootmem allocator wrapper. Tested with both flat memory and sparse (faked) memory models with highmem enabled.

Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
-
By Grygorii Strashko

If allowed by a call to memblock_allow_resize(), the memblock core will try to allocate additional memory and rearrange its internal data when more than INIT_MEMBLOCK_REGIONS (128) memory regions of any type have been allocated. If this happens before lowmem is mapped (which is now done by map_lowmem()), the system will hang, because the memblock core will try to operate on virtual addresses which aren't mapped yet. In the ARM code, memblock resizing is allowed (memblock_allow_resize()) from arm_memblock_init(), which is called before map_lowmem(), so this may lead to the error described above. Hence, allow memblock resizing later during init, from bootmem_init(), when all appropriate mappings are ready.

Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
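In outline, the fix moves the call from arm_memblock_init() into bootmem_init(); a sketch of the destination (both functions live in arch/arm/mm/init.c, unrelated code elided):

    void __init bootmem_init(void)
    {
        unsigned long min, max_low, max_high;

        /* lowmem is mapped by now (map_lowmem() has already run), so it is
         * safe to let memblock reallocate and move its region arrays */
        memblock_allow_resize();

        find_limits(&min, &max_low, &max_high);
        /* ... */
    }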
-
By Santosh Shilimkar

With commit 26ba47b1 ("ARM: 7805/1: mm: change max*pfn to include the physical offset of memory"), max_pfn already contains PHYS_PFN_OFFSET, so it shouldn't be taken into account again. While at it, use the set_max_mapnr() helper.

Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
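The resulting call in mem_init() is plausibly the one-liner below (a sketch; set_max_mapnr() is the generic helper, and max_pfn here already counts from PFN 0):

    /* max_pfn already includes PHYS_PFN_OFFSET; don't add it again */
    set_max_mapnr(pfn_to_page(max_pfn) - mem_map);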
-
- 15 Nov 2013, 1 commit
-
By Kirill A. Shutemov

We're going to introduce a split page table lock for the PMD level. Let's rename the existing split ptlock for the PTE level to avoid confusion.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Tested-by: Alex Thorlton <athorlton@sgi.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Dave Jones <davej@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Robin Holt <robinmholt@gmail.com>
Cc: Sedat Dilek <sedat.dilek@gmail.com>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 14 Nov 2013, 3 commits
-
By Mahesh Sivasubramanian

LPAE-enabled kernels use the 64-bit version of the TTBR0 and TTBR1 registers. If we're running an LPAE kernel, fill the upper half of TTBR0 with 0, because we're setting it to the idmap here (the idmap is guaranteed to be < 4GB), and fully restore TTBR1 instead of just restoring the lower 32 bits. Failure to do so can cause failures on resume from suspend when these registers are only half restored.

Signed-off-by: Mahesh Sivasubramanian <msivasub@codeaurora.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Michal Simek

An ECC policy can be applied to the whole system when this bit is implemented by the SoC vendor (IMP, bit 9, in the L1 page table entry format). When this bit is not implemented by the SoC vendor, it doesn't mean that the system has no other way to do ECC. This patch ensures that this message is shown only when ECC is requested via the ecc=on command line option and the kernel runs on an appropriate ARM core.

Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

The 0-day kernel build robot found this new warning:

arch/arm/mm/nommu.c:303:17: warning: 'struct proc_info_list' declared inside parameter list [enabled by default]
arch/arm/mm/nommu.c:303:17: warning: its scope is only this definition or declaration, which is probably not what you want [enabled by default]

Fix it by including the appropriate header.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 07 Nov 2013, 2 commits
-
By Thierry Reding

No-MMU configurations currently fail to build because they are missing the early_paging_init() symbol.

Acked-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Marc Zyngier

The exception handling code fails to clear the IT state, potentially leading to incorrect execution of the fixup if the size of the IT block is more than one. Let fixup_exception do the IT sanitizing if a fixup has been found, and restore the CPSR from the stack when returning from a data abort.

Cc: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 31 Oct 2013, 2 commits
-
By Santosh Shilimkar

Most of the kernel code assumes that max*pfn is the maximum PFN because the physical start of memory is expected to be PFN 0. Since this assumption is not true on ARM architectures, the meaning of max*pfn there has been the number of memory pages. This was done to keep happy those drivers which use these variables to calculate the DMA bounce limit from dma_mask. Now that we have an architecture override possibility for the maximum DMAable PFN, let's make max*pfn mean the maximum PFN on ARM as well.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

We need to start treating DMA masks as something which is specific to the bus that the device resides on, otherwise we're going to hit all sorts of nasty issues with LPAE and 32-bit DMA controllers in >32-bit systems, where memory is offset from PFN 0. In order to start doing this, we convert the DMA mask to a PFN using the device-specific dma_to_pfn() macro. This is the reverse of the pfn_to_dma() macro, which is used to get the DMA address for the device. This gives us a PFN mask, which we can then check against the PFN limit of the DMA zone.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
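A sketch of the check as described (dma_to_pfn() and pfn_to_dma() are the ARM helpers from asm/dma-mapping.h; arm_dma_pfn_limit stands for the DMA zone's top PFN):

    /* sketch: translate the bus-relative mask into a PFN before comparing,
     * so bus offsets are accounted for on LPAE systems */
    unsigned long limit_pfn = dma_to_pfn(dev, mask);

    if (limit_pfn < arm_dma_pfn_limit)
        return 0;   /* mask cannot reach the whole DMA zone */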
-
- 29 Oct 2013, 1 commit
-
By Sergey Dyasly

With LPAE enabled, the physical address space is larger than 4GB. Allow mapping any part of it via /dev/mem by using PHYS_MASK to determine the valid range. PHYS_MASK covers 40 bits with LPAE enabled and 32 bits otherwise.

Reported-by: Vassili Karpov <av1474@comtv.ru>
Signed-off-by: Sergey Dyasly <dserrg@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
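A minimal sketch of the resulting /dev/mem range check (illustrative body, not the verbatim implementation; the ARM hook lives in arch/arm/mm/mmap.c):

    /* sketch: any physical range under PHYS_MASK is mappable, which
     * extends /dev/mem to the full 40-bit space under LPAE */
    int valid_phys_addr_range(phys_addr_t addr, size_t size)
    {
        return addr + size - 1 <= (phys_addr_t)PHYS_MASK;
    }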
-
- 24 Oct 2013, 1 commit
-
By Russell King

DMA mapping permissions were being derived from pgprot_kernel directly, without using PAGE_KERNEL. This causes them to be marked with executable permission, which is not what we want. Fix this.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 20 Oct 2013, 3 commits
-
By Ben Dooks

If we are in BE8 mode, we must deal with the instruction stream being in LE order when data is being loaded in BE order, so ensure the data is swapped before processing. Change to using <asm/opcodes.h> to provide the necessary conversion functions to change the byte ordering. This stops the following warning messages from the kernel on a fault:

Unhandled fault: alignment exception (0x001) at 0xbfa09567
Alignment trap: not handling instruction 030091e8 at [<80333e8c>]

Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
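The conversion helpers in question come from <asm/opcodes.h>; a sketch of their use when decoding a faulting instruction (the surrounding names, such as instrptr, are illustrative):

    #include <asm/opcodes.h>

    /* instructions are stored little-endian even in BE8 mode, so convert
     * the raw fetch into CPU byte order before decoding it */
    u32 raw = *(u32 *)instrptr;
    unsigned long instr = __mem_to_opcode_arm(raw);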
-
By Ben Dooks

Add an ARM_BE8() helper to wrap any code conditional on being compiled when CONFIG_ARM_ENDIAN_BE8 is selected, and convert the existing places doing this to use it.

Acked-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
-
By Ben Dooks

The Kconfig for arch/arm/mach-ixp4xx has a local definition of ARCH_SUPPORTS_BIG_ENDIAN which could be used elsewhere. This means that if IXP4xx is selected and this symbol is selected elsewhere, a warning is produced. Clean the following warning up by making the symbol be selected by the main ARCH_IXP4XX definition and having a common definition in arch/arm/mm/Kconfig:

warning: (ARCH_xxx) selects ARCH_SUPPORTS_BIG_ENDIAN which has unmet direct dependencies (ARCH_IXP4XX)

Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
-
- 15 Oct 2013, 1 commit
-
By Marek Szyprowski

This reverts commit 10bcdfb8. There is no consensus on the bindings for reserved memory, so the code for handling it will be reverted.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>
-