- 29 June 2015, 2 commits

Committed by Russell King

Add a message suggesting that HIGHMEM be enabled when physical memory is truncated due to a lack of virtual address space to map it in the low memory mapping.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Committed by Laura Abbott

The memblock limit is currently used in find_limits to find the bounds for ZONE_NORMAL. The memblock limit may need to be rounded down by a PMD size, though, to ensure allocations are fully mapped. This has the side effect of reducing the amount of memory in ZONE_NORMAL. Once all lowmem is mapped, it is safe to change the memblock limit back to include the unaligned section. Adjust the memblock limit after lowmem mapping is complete.

Before:
  # cat /proc/zoneinfo | grep managed
  managed 62907
  managed 424

After:
  # cat /proc/zoneinfo | grep managed
  managed 63331

Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
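
A minimal sketch of the idea (not the verbatim patch; the function name is illustrative, arm_lowmem_limit is the existing ARM symbol for the top of lowmem):

  #include <linux/memblock.h>

  extern phys_addr_t arm_lowmem_limit;

  /* Called once map_lowmem() has finished. During mapping, the limit
   * was rounded down so early page-table allocations stayed inside
   * mapped memory; the unaligned tail is mapped now, so hand it back
   * to the allocator (and thus to ZONE_NORMAL). */
  static void __init restore_memblock_limit(void)
  {
          memblock_set_current_limit(arm_lowmem_limit);
  }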

- 14 May 2015, 1 commit

Committed by Mark Rutland

At boot time we round the memblock limit down to section size in an attempt to ensure that we will have mapped this RAM with section mappings prior to allocating from it. When mapping RAM we iterate over PMD-sized chunks, creating these section mappings.

Section mappings are only created when the end of a chunk is aligned to section size. Unfortunately, with classic page tables (where PMD_SIZE is 2 * SECTION_SIZE) this means that if a chunk is between 1M and 2M in size, the first 1M will not be mapped despite having been accounted for in the memblock limit. This has been observed to result in page tables being allocated from unmapped memory, causing boot-time hangs.

This patch modifies the memblock limit rounding to always round down to PMD_SIZE instead of SECTION_SIZE. For the classic MMU this means that we will round the memblock limit down to a 2M boundary, matching the limits on section mappings and preventing allocations from unmapped memory. For LPAE there should be no change, as PMD_SIZE == SECTION_SIZE.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Stefan Agner <stefan@agner.ch>
Tested-by: Stefan Agner <stefan@agner.ch>
Acked-by: Laura Abbott <labbott@redhat.com>
Tested-by: Hans de Goede <hdegoede@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Steve Capper <steve.capper@linaro.org>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
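
The core of the change, sketched (memblock_limit is the local computed in the memblock walk described in the 22 July 2013 entry below):

  /* Round down to PMD_SIZE, not SECTION_SIZE: with classic 2-level
   * tables PMD_SIZE == 2 * SECTION_SIZE, so SECTION_SIZE rounding can
   * leave the first 1M of a 1M-2M chunk unmapped yet allocatable. */
  memblock_limit = round_down(memblock_limit, PMD_SIZE);
  memblock_set_current_limit(memblock_limit);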

- 8 January 2015, 1 commit

Committed by Grygorii Strashko

The local variables kernel_x_start and kernel_x_end are currently defined using the 'unsigned long' type, which is wrong because they represent a physical memory range and will be calculated incorrectly if LPAE is enabled. As a result, all following code in map_lowmem() will not work correctly.

For example, Keystone 2 boot is broken because

  kernel_x_start == 0x0000 0000
  kernel_x_end   == 0x0080 0000

instead of

  kernel_x_start == 0x0000 0008 0000 0000
  kernel_x_end   == 0x0000 0008 0080 0000

and as a result the whole of low memory will be mapped with MT_MEMORY_RW permissions by this code (start > kernel_x_end):

  } else if (start >= kernel_x_end) {
          map.pfn = __phys_to_pfn(start);
          map.virtual = __phys_to_virt(start);
          map.length = end - start;
          map.type = MT_MEMORY_RW;

          create_mapping(&map);
  }

Hence, fix it by using the phys_addr_t type for the variables kernel_x_start and kernel_x_end.

Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Grygorii Strashko <grygorii.strashko@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
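
The fix is a type change; roughly (the initializers shown are the usual section-rounded kernel text bounds, included for illustration):

  /* before: truncated to 32 bits when LPAE addresses exceed 4GB */
  unsigned long kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
  unsigned long kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);

  /* after: the full physical address survives, e.g. 0x8_0000_0000 */
  phys_addr_t kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
  phys_addr_t kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);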

- 4 December 2014, 1 commit

Committed by Jungseung Lee

The set_memory_* functions have the same implementation except for the memory attribute they change. This patch makes them use a common function, and pulls the functions out into arch/arm/mm/pageattr.c as arm64 did. This reduces code size and improves readability.

Signed-off-by: Jungseung Lee <js07.lee@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
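
A sketch of the consolidation pattern (modelled on arm64; treat the helper as illustrative rather than the exact patch):

  static int change_memory_common(unsigned long addr, int numpages,
                                  pgprot_t set_mask, pgprot_t clear_mask);

  /* each set_memory_* call becomes a thin wrapper naming its attribute */
  int set_memory_ro(unsigned long addr, int numpages)
  {
          return change_memory_common(addr, numpages,
                                      __pgprot(L_PTE_RDONLY), __pgprot(0));
  }

  int set_memory_rw(unsigned long addr, int numpages)
  {
          return change_memory_common(addr, numpages,
                                      __pgprot(0), __pgprot(L_PTE_RDONLY));
  }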

- 3 December 2014, 1 commit

Committed by Jungseung Lee

Modern ARMv7-A/R cores optionally implement the following new hardware feature:

- PXN: Privileged execute-never. The PXN bit determines whether the processor can execute software from the region while in privileged mode. This is an effective mitigation against ret2usr attacks. On an implementation that does not include LPAE, PXN is optionally supported.

This patch sets the PXN bit in user page tables, preventing user code from being executed in privileged mode.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jungseung Lee <js07.lee@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
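
Roughly what this amounts to in the short-descriptor format (a sketch; the feature check is a stand-in for the real CPU detection):

  /* PXN is bit 2 of a first-level page-table descriptor */
  #define PMD_PXNTABLE    (_AT(pmdval_t, 1) << 2)

  pmdval_t prot = _PAGE_USER_TABLE;
  if (cpu_supports_pxn())         /* illustrative feature check */
          prot |= PMD_PXNTABLE;   /* user tables: no privileged execution */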

- 21 November 2014, 1 commit

Committed by Russell King

Convert many (but not all) printk(KERN_*) calls to pr_* to simplify the code. We take the opportunity to join some printk lines together so we don't split a message across several lines, and we also add levels to a few messages which were previously missing them.

Tested-by: Andrew Lunn <andrew@lunn.ch>
Tested-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
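
The shape of the conversion (an illustrative example, not a hunk from the patch):

  /* before: explicit level, message split across two calls */
  printk(KERN_WARNING "unable to allocate ");
  printk("%lu bytes\n", size);

  /* after: one pr_* call, level folded into the helper */
  pr_warn("unable to allocate %lu bytes\n", size);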

- 17 October 2014, 3 commits

Committed by Kees Cook

Add CONFIG_ARM_KERNMEM_PERMS to separate the kernel memory regions into section-sized areas that can have different permissions. The NX permission changes are performed during free_initmem, so that init memory can still be reclaimed.

This uses section size instead of PMD size to reduce the memory lost to padding on non-LPAE systems.

Based on work by Brad Spengler, Larry Bassel, and Laura Abbott.

Signed-off-by: Kees Cook <keescook@chromium.org>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Nicolas Pitre <nico@linaro.org>

Committed by Kees Cook

This is used from set_fixmap() and clear_fixmap() via asm-generic/fixmap.h. It also makes sure that the fixmap allocation fits into the expected range.

Based on a patch by Rabin Vincent.

Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Rabin Vincent <rabin@rab.in>
Acked-by: Nicolas Pitre <nico@linaro.org>
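
Typical use through the generic API (a usage sketch; FIX_KMAP_BEGIN is just an example slot):

  #include <asm/fixmap.h>

  void *vaddr;

  /* map a physical page at a fixed virtual slot, use it, unmap it */
  set_fixmap(FIX_KMAP_BEGIN, page_to_phys(page));
  vaddr = (void *)fix_to_virt(FIX_KMAP_BEGIN);
  /* ... access the page through vaddr ... */
  clear_fixmap(FIX_KMAP_BEGIN);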

Committed by Rob Herring

With commit a05e54c1 ("ARM: 8031/2: change fixmap mapping region to support 32 CPUs"), the fixmap region was expanded to 2MB, but it precluded any other uses of the fixmap region. In order to support other uses, the fixmap region needs to be expanded beyond 2MB. Fortunately, the adjacent 1MB range 0xffe00000-0xfff00000 is available.

Remove the fixmap_page_table pointer and look up the page table via the virtual address, so that the fixmap region can span more than one pmd. The 2nd pmd is already created, since it is shared with the vector page.

Signed-off-by: Rob Herring <robh@kernel.org>
[kees: fixed CONFIG_DEBUG_HIGHMEM get_fixmap() calls]
[kees: moved pte allocation outside of CONFIG_HIGHMEM]
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
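
The lookup-by-address idea, sketched (helper name illustrative; pmd_off_k() is the kernel-internal "pmd for this kernel virtual address" helper):

  static inline pte_t *fixmap_pte_of(unsigned long vaddr)
  {
          /* derive the pte from the address itself instead of a single
           * cached fixmap_page_table, so the region may span pmds */
          pmd_t *pmd = pmd_off_k(vaddr);

          return pte_offset_kernel(pmd, vaddr);
  }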

- 26 September 2014, 1 commit

Committed by Joe Perches

Use the more common pr_warn. Other miscellanea:

- Coalesce formats
- Realign arguments

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 2 August 2014, 1 commit

Committed by Russell King

Add further comments to the early page table remap code to explain what the code is doing, why it is doing it, and, more importantly, to explain that the code is not architecturally compliant and is squarely in "UNPREDICTABLE" behaviour territory. Add a warning and taint the kernel too.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 29 July 2014, 1 commit

Committed by Russell King

If init_mm.brk is not section aligned, the LPAE fixup code will miss updating the final PMD. Fix this by aligning map_end.

Fixes: a77e0c7b ("ARM: mm: Recreate kernel mappings in early_paging_init()")
Cc: <stable@vger.kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
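
The fix is essentially a one-line alignment (a sketch, not the exact hunk):

  /* round up so the PMD-by-PMD fixup walk also covers the final,
   * partial section (on LPAE, PMD_SIZE == SECTION_SIZE) */
  map_end = ALIGN(init_mm.brk, SECTION_SIZE);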

- 2 June 2014, 5 commits

Committed by Russell King

This does the same as the previous commit, but for the S bit, which also needs to match the initial value the assembly code used, for the same reasons. Again, we add a check for SMP to ensure that the page tables are correctly set up for SMP.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Committed by Russell King

Fix a long-standing bug where, for ARMv6+, we don't fully ensure that the C code sets the same cache policy as the assembly code. This was introduced partially by commit 11179d8c ([ARM] 4497/1: Only allow safe cache configurations on ARMv6 and later) and also by the addition of SMP support.

This patch sets the default cache policy based on the flags used by the assembly code, and then ensures that when a cache policy command line argument is used, we verify on ARMv6 that it matches the initial setup.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Committed by Russell King

adjust_cr() is no longer used, so let's get rid of it.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Committed by Russell King

Keep all the bits of alignment handling together.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Committed by Russell King

Several places open-code this manipulation; let's consolidate it.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 1 June 2014, 1 commit

Committed by Laura Abbott

memblock is now fully integrated into the kernel and is the preferred method for tracking memory. Rather than reinvent the wheel with meminfo, migrate to using memblock directly instead of meminfo as an intermediate.

Acked-by: Jason Cooper <jason@lakedaemon.net>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
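
The new style of iteration, sketched with the memblock API as it looked at the time:

  #include <linux/memblock.h>

  struct memblock_region *reg;

  /* walk RAM straight out of memblock instead of the meminfo banks */
  for_each_memblock(memory, reg) {
          phys_addr_t start = reg->base;
          phys_addr_t end = reg->base + reg->size;

          /* ... per-region work formerly done over meminfo ... */
  }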

- 26 May 2014, 1 commit

Committed by Will Deacon

"dsb st" can be used to ensure completion of pending cache maintenance operations, so use it for the v7 cache maintenance operations.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
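
The barrier in question, in v7 inline-assembly form:

  /* wait only for stores to complete - enough for the write side of
   * cache maintenance - rather than issuing a full "dsb sy" */
  asm volatile("dsb st" : : : "memory");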

- 23 April 2014, 1 commit

Committed by Liu Hua

On 32-bit ARM systems, the fixmap mapping region can support no more than 14 CPUs (total: 896K; one CPU: 64K), yet NR_CPUS can be configured up to 32, so there is a mismatch. This patch moves the fixmap mapping region down to 0xffc00000-0xffe00000, so that it can support up to 32 CPUs.

Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Liu Hua <sdu.liu@huawei.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
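
The arithmetic behind the move, as a sketch of the new layout (the window is from the message above; the macro names and the KM_TYPE_NR = 16 per-CPU slot count are illustrative assumptions):

  #define FIXADDR_START   0xffc00000UL
  #define FIXADDR_END     0xffe00000UL    /* 2M window */

  /* 16 kmap_atomic slots x 4K = 64K per CPU; 32 CPUs x 64K = 2M,
   * which fits, whereas the old 896K window held only 14 x 64K */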

- 10 February 2014, 2 commits

Committed by Will Deacon

CPU_32v6 currently selects CPU_USE_DOMAINS if CPU_V6 and MMU. This is because ARM 1136 r0pX CPUs lack the v6k extensions and therefore do not have hardware thread registers. The lack of these registers requires the kernel to update the vectors page at each context switch in order to write a new TLS pointer. This write must be done via the userspace mapping, since aliasing caches can lead to expensive flushing when using kmap. Finally, this requires the vectors page to be mapped r/w for kernel and r/o for user, which has implications for things like put_user, which must trigger CoW appropriately when targeting user pages.

The upshot of all this is that a v6/v7 kernel makes use of domains to segregate kernel and user memory accesses. This has the nasty side-effect of making device mappings executable, which has been observed to cause subtle bugs on recent cores (e.g. a Cortex-A15 performing a speculative instruction fetch from the GIC and acking an interrupt in the process).

This patch solves the problem by removing the remaining domain support from ARMv6. A new memory type is added specifically for the vectors page, which allows that page (and only that page) to be mapped as user r/o, kernel r/w. All other user r/o pages are also mapped as kernel r/o.

Patch co-developed with Russell King.

Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Committed by Christoffer Dall

The stage-2 memory attributes are distinct from the Hyp memory attributes and the stage-1 memory attributes. We were using the stage-1 memory attributes for stage-2 mappings, causing device mappings to be mapped as normal memory. Add the S2 equivalent defines for memory attributes, and fix the comments explaining the defines while at it.

Add a prot_pte_s2 field to the mem_type struct and fill out the field for device mappings accordingly.

Cc: <stable@vger.kernel.org> [3.9+]
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
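
The struct change, roughly (field order illustrative):

  struct mem_type {
          pteval_t prot_pte;
          pteval_t prot_pte_s2;   /* new: stage-2 attributes for KVM */
          pmdval_t prot_l1;
          pmdval_t prot_sect;
          unsigned int domain;
  };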

- 11 December 2013, 4 commits

Committed by Russell King

The CMA region was being marked executable:

  0xdc04e000-0xdc050000       8K RW x  MEM/CACHED/WBRA
  0xdc060000-0xdc100000     640K RW x  MEM/CACHED/WBRA
  0xdc4f5000-0xdc500000      44K RW x  MEM/CACHED/WBRA
  0xdcce9000-0xe0000000   52316K RW x  MEM/CACHED/WBRA

This is mainly due to the badly worded MT_MEMORY_DMA_READY symbol, but there are also a few other places in dma-mapping which should be corrected to use the right constant. Fix all these places:

  0xdc04e000-0xdc050000       8K RW NX MEM/CACHED/WBRA
  0xdc060000-0xdc100000     640K RW NX MEM/CACHED/WBRA
  0xdc280000-0xdc300000     512K RW NX MEM/CACHED/WBRA
  0xdc6fc000-0xe0000000   58384K RW NX MEM/CACHED/WBRA

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Committed by Laura Abbott

Other architectures define various set_memory functions to allow attributes to be changed (e.g. set_memory_x, set_memory_rw, etc.). Currently, these functions are missing on ARM. Define them in an appropriate manner for ARM.

Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
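
The API surface this adds, as these helpers are declared on other architectures (each returns 0 on success):

  int set_memory_ro(unsigned long addr, int numpages);
  int set_memory_rw(unsigned long addr, int numpages);
  int set_memory_x(unsigned long addr, int numpages);
  int set_memory_nx(unsigned long addr, int numpages);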

Committed by Russell King

Add basic NX support for kernel lowmem mappings. We mark any section which does not overlap kernel text as non-executable, preventing it from being used to write code and then execute it directly from there. This does not change the alignment of the sections, so the kernel image doesn't grow significantly, and we can therefore do this without needing a config option.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Committed by Russell King

Document the permissions which the various MT_MEMORY* mapping types will provide.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 14 November 2013, 1 commit

Committed by Michal Simek

The ECC policy can be applied to the whole system when this bit is implemented by the SoC vendor (IMP - bit 9 - in the L1 page table entry format). When this bit is not implemented by the SoC vendor, it doesn't mean that the system has no other way to do ECC. This patch ensures the message is shown only when ECC is requested via the ecc=on command line option and the kernel runs on an appropriate ARM core.

Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 11 October 2013, 1 commit

Committed by Santosh Shilimkar

This patch adds a step to the init sequence to recreate the kernel code/data page table mappings prior to full paging initialization. This is necessary on LPAE systems that run from a physical address space outside the 4G limit. On these systems, this implementation provides a machine descriptor hook that allows PHYS_OFFSET to be overridden in a machine-specific fashion.

Cc: Russell King <linux@arm.linux.org.uk>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: R Sricharan <r.sricharan@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>

- 1 August 2013, 2 commits

Committed by Russell King

If the kuser helpers are not provided by the kernel, disable user access to the vectors page. With the kuser helpers gone, there is no reason for this page to be visible to userspace.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Committed by Russell King

Move the machine vector stubs into the page above the vector page, which we can prevent from being visible to userspace. Also move the reset stub, and place the swi vector at a location the 'ldr' can reach. This hides pointers into the kernel which could give valuable information to attackers, and reduces the number of exploitable instructions at a fixed address.

Cc: <stable@vger.kernel.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 26 July 2013, 1 commit

Committed by Russell King

struct machine_desc records are defined everywhere as a 'const' structure, but unfortunately they lose their const-ness through the use of linker magic - the symbols which surround the section are not declared const, so it becomes possible not to use 'const' for pointers to these const structures. Let's fix this oversight - all pointers to these structures should be marked const too.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 22 July 2013, 1 commit

Committed by Russell King

When map_lowmem() runs and processes a memory bank whose start or end is not section-aligned, memory must be allocated to store the 2nd-level page tables. Those allocations are made by calling memblock_alloc(). At this point, the only memory that is free *and* mapped is memory which has already been mapped by map_lowmem() itself. For this reason, we must calculate the first point at which map_lowmem() will need to allocate memory, and set the memblock allocation limit to a lower address, so that memblock_alloc() is guaranteed to return memory that is already mapped.

This patch enhances sanity_check_meminfo() to calculate that memory address and pass it to memblock_set_current_limit(), rather than just assuming the limit is arm_lowmem_limit. The algorithm applied is the following (a condensed sketch appears after this entry):

- Default memblock_limit to arm_lowmem_limit in the absence of any other limit; arm_lowmem_limit is the highest memory that is mapped by map_lowmem().

- While walking the list of memblocks, if the start of a block is not aligned, 2nd-level page tables will need to be allocated to map the first few pages of the block. Hence, the memblock_limit must be before the start of the block.

- Similarly, if the end of any block is not aligned, 2nd-level page tables will need to be allocated to map the last few pages of the block. Hence, the memblock_limit must point at the end of the block, rounded down to section alignment.

- The memory blocks are assumed to be sorted in address order, so the first unaligned block start or end is used to set the limit.

With this algorithm, the start or end of almost any bank can be non-section-aligned. The only exception is that the start of bank 0 must be section-aligned, since otherwise memory would need to be allocated when mapping the start of bank 0, which occurs before any free memory is mapped.

[swarren, wrote commit description, rewrote calculation of memblock_limit]

Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
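
A condensed sketch of that walk (illustrative, not the verbatim patch):

  phys_addr_t memblock_limit = 0;
  struct memblock_region *reg;

  for_each_memblock(memory, reg) {
          phys_addr_t start = reg->base;
          phys_addr_t end = reg->base + reg->size;

          if (!IS_ALIGNED(start, SECTION_SIZE)) {
                  /* 2nd-level tables are needed to map the head of
                   * this bank: allocate only below it */
                  memblock_limit = start;
                  break;
          }
          if (!IS_ALIGNED(end, SECTION_SIZE)) {
                  /* unaligned tail: stop at the last section boundary */
                  memblock_limit = round_down(end, SECTION_SIZE);
                  break;
          }
  }

  if (!memblock_limit)
          memblock_limit = arm_lowmem_limit;      /* the default */

  memblock_set_current_limit(memblock_limit);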

- 9 July 2013, 1 commit

Committed by Stephen Boyd

Failure to add the mapping created in debug_ll_io_init() can lead to the BUG_ON() triggering in lib/ioremap.c:27 if the static virtual address chosen for the debug_ll mapping overlaps with another mapping that is created later. This happens because the generic ioremap code has no idea there is a mapping there; it tries to place a mapping in the same location and blows up when it sees that a pte is already present.

  kernel BUG at lib/ioremap.c:27!
  Internal error: Oops - BUG: 0 [#1] PREEMPT SMP ARM
  Modules linked in:
  CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.10.0-rc2-00042-g2af0c67-dirty #316
  task: ef088000 ti: ef082000 task.ti: ef082000
  PC is at ioremap_page_range+0x16c/0x198
  LR is at ioremap_page_range+0xf0/0x198
  pc : [<c04cb874>]    lr : [<c04cb7f8>]    psr: 20000113
  sp : ef083e78  ip : af140000  fp : ef083ebc
  r10: ef7fc100  r9 : ef7fc104  r8 : 000af174
  r7 : 00000647  r6 : beffffff  r5 : f004c000  r4 : f0040000
  r3 : af173417  r2 : 16440653  r1 : af173e07  r0 : ef7fc8fc
  Flags: nzCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment kernel
  Control: 10c5787d  Table: 8020406a  DAC: 00000015
  Process swapper/0 (pid: 1, stack limit = 0xef082238)
  Stack: (0xef083e78 to 0xef084000)
  3e60:                                              00040000 ef083eec
  3e80: bf134000 f004bfff c0207c00 f004c000 c02fc120 f000c000 c15e7800 00040000
  3ea0: ef083eec 00000647 c098ba9c c0953544 ef083edc ef083ec0 c021b82c c04cb714
  3ec0: c09cdc50 00000040 ef0f1e00 ef1003c0 ef083f14 ef083ee0 c09535bc c021b7bc
  3ee0: c0953544 c04d0c6c c094e2cc c1600be4 c07440c4 c09a6888 00000002 c0a15f00
  3f00: ef082000 00000000 ef083f54 ef083f18 c0208728 c0953550 00000002 c1600bfc
  3f20: c08e3fac c0839918 ef083f54 c1600b80 c09a6888 c0a15f00 0000008b c094e2cc
  3f40: c098ba9c c098bab8 ef083f94 ef083f58 c094ea0c c020865c 00000002 00000002
  3f60: c094e2cc 00000000 c025b674 00000000 c06ff860 00000000 00000000 00000000
  3f80: 00000000 00000000 ef083fac ef083f98 c06ff878 c094e910 00000000 00000000
  3fa0: 00000000 ef083fb0 c020efe8 c06ff86c 00000000 00000000 00000000 00000000
  3fc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
  3fe0: 00000000 00000000 00000000 00000000 00000013 00000000 00000000 c0595108
  [<c04cb874>] (ioremap_page_range+0x16c/0x198) from [<c021b82c>] (__alloc_remap_buffer.isra.18+0x7c/0xc4)
  [<c021b82c>] (__alloc_remap_buffer.isra.18+0x7c/0xc4) from [<c09535bc>] (atomic_pool_init+0x78/0x128)
  [<c09535bc>] (atomic_pool_init+0x78/0x128) from [<c0208728>] (do_one_initcall+0xd8/0x198)
  [<c0208728>] (do_one_initcall+0xd8/0x198) from [<c094ea0c>] (kernel_init_freeable+0x108/0x1d0)
  [<c094ea0c>] (kernel_init_freeable+0x108/0x1d0) from [<c06ff878>] (kernel_init+0x18/0xf4)
  [<c06ff878>] (kernel_init+0x18/0xf4) from [<c020efe8>] (ret_from_fork+0x14/0x20)
  Code: e50b0040 ebf54b2f e51b0040 eaffffee (e7f001f2)

Fix it by telling the generic layers about the static mapping via iotable_init(). This also has the nice side effect of letting you see the mapping in procfs' vmallocinfo file.

Cc: Rob Herring <rob.herring@calxeda.com>
Cc: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
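
The registration that fixes it, sketched with example addresses (the real values come from the DEBUG_LL configuration):

  #include <asm/mach/map.h>

  /* describe the static mapping to the generic layers so a later
   * ioremap() is never placed on top of it */
  static struct map_desc debug_ll_desc __initdata = {
          .virtual = 0xf0100000UL,                /* example only */
          .pfn = __phys_to_pfn(0x10100000),       /* example only */
          .length = SZ_4K,
          .type = MT_DEVICE,
  };

  iotable_init(&debug_ll_desc, 1);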

- 17 June 2013, 1 commit

Committed by Po-Yu Chuang

This bug was introduced by commit e651eab0; some v4/v5 platforms failed to boot because of it.

Signed-off-by: Po-Yu Chuang <ratbert.chuang@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 30 May 2013, 4 commits

Committed by Cyril Chemparathy

This patch cleans up the highmem sanity check code by simplifying the range checks with a pre-calculated size_limit. It should otherwise have no functional impact on behaviour.

It also removes a redundant (bank->start < vmalloc_limit) check, since this is already covered by the !highmem condition.

Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

Committed by Cyril Chemparathy

On Keystone platforms, physical memory is entirely outside the 32-bit addressable range. Therefore, the (bank->start > ULONG_MAX) check below marks the entire system memory as highmem, and this causes unpleasantness all over.

This patch eliminates the extra bank-start check (against ULONG_MAX) by checking bank->start against the physical address corresponding to vmalloc_min instead. In the process, it also cleans up parts of the highmem sanity check code by removing what has now become a redundant check for banks that entirely overlap the vmalloc range.

Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
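
The replacement check, sketched:

  /* compare bank start against the physical address corresponding to
   * vmalloc_min, instead of against ULONG_MAX */
  phys_addr_t vmalloc_limit = __pa(vmalloc_min - 1) + 1;

  if (bank->start >= vmalloc_limit)
          highmem = 1;    /* the whole bank lies above lowmem */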

Committed by Cyril Chemparathy

This patch modifies the highmem sanity checking code to use physical addresses instead. This change eliminates the wrap-around problems associated with the original virtual-address-based checks, and simplifies the code a bit.

The one constraint imposed here is that low physical memory must be mapped in a monotonically increasing fashion if there are multiple banks of memory, i.e., x < y must imply pa(x) < pa(y).

Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

Committed by Vitaly Andrianov

This patch fixes the alloc_init_pud() function to use phys_addr_t instead of unsigned long when passing in the phys argument. This is an extension of commit 97092e0c (ARM: pgtable: use phys_addr_t for physical addresses), which applied similar changes elsewhere in the ARM memory management code.

Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

- 24 May 2013, 1 commit

Committed by Maxime Ripard

More and more sub-architectures are using only the debug_ll_io_init function as their map_io function. Make the core code call this function if no map_io function is specified in the machine description, to remove some boilerplate.

Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
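
The fallback, sketched (this is the shape of the core-code change):

  /* in the core map_io step: use the DEBUG_LL mapping when a machine
   * provides no map_io of its own */
  if (mdesc->map_io)
          mdesc->map_io();
  else
          debug_ll_io_init();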