- 01 Mar 2016, 2 commits
-
-
Committed by Ard Biesheuvel
Commit c031a421 ("arm64: kaslr: randomize the linear region") implements randomization of the linear region, by subtracting a random multiple of PUD_SIZE from memstart_addr. This causes the virtual mapping of system RAM to move upwards in the linear region, and at the same time causes memstart_addr to assume a value which may be negative if the offset of system RAM in the physical space is smaller than its offset relative to PAGE_OFFSET in the virtual space. Since memstart_addr is effectively an offset now, redefine its type as s64 so that expressions involving shifting or division preserve its sign. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
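The sign-preservation point can be demonstrated with a small stand-alone snippet (plain user-space C, not kernel code); the same bit pattern behaves very differently under a right shift depending on whether the variable is signed:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        int64_t  s_memstart = -0x40000000;  /* hypothetical negative offset */
        uint64_t u_memstart = -0x40000000;  /* same bits, stored unsigned */

        /* The arithmetic shift keeps the sign; the unsigned shift yields a huge bogus value. */
        printf("s64 >> 12 = %lld\n", (long long)(s_memstart >> 12));
        printf("u64 >> 12 = %llu\n", (unsigned long long)(u_memstart >> 12));
        return 0;
    }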
-
Committed by Ard Biesheuvel
In the boot log, instead of listing .init first, list .text, .rodata, .init and .data in the same order they appear in memory. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 26 Feb 2016, 2 commits
-
-
Committed by Jeremy Linton
Currently the .rodata section is actually still executable when DEBUG_RODATA is enabled. This change makes .rodata actually read-only and non-executable. It also adds the .rodata section to the mem_init banner. Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Reviewed-by: Kees Cook <keescook@chromium.org> Acked-by: Mark Rutland <mark.rutland@arm.com> [catalin.marinas@arm.com: added vm_struct vmlinux_rodata in map_kernel()] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Miles Chen
Remove the unnecessary boundary check, since there is a huge gap between the user and kernel address ranges so they can never overlap (arm64 does not have enough levels of page tables to cover the full 64-bit virtual address space). See Documentation/arm64/memory.txt. Signed-off-by: Miles Chen <miles.chen@mediatek.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 24 Feb 2016, 1 commit
-
-
Committed by Ard Biesheuvel
When KASLR is enabled (CONFIG_RANDOMIZE_BASE=y) and entropy has been provided by the bootloader, randomize the placement of RAM inside the linear region if sufficient space is available. For instance, on a 4 KB granule/3-level kernel, the linear region is 256 GB in size, and we can choose any 1 GB aligned offset that is far enough from the top of the address space to fit the distance between the start of the lowest memblock and the top of the highest memblock. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
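A minimal sketch of the offset selection described above; the identifiers kaslr_seed and linear_region_size, and the exact arithmetic, are illustrative assumptions rather than the kernel's actual code:

    u64 span  = memblock_end_of_DRAM() - memblock_start_of_DRAM();
    u64 slack = linear_region_size - span;   /* room left for sliding RAM upwards */
    u64 steps = slack / PUD_SIZE;            /* number of aligned candidate offsets */

    if (steps && kaslr_seed)                 /* kaslr_seed: entropy handed over by the bootloader */
        memstart_addr -= PUD_SIZE * (kaslr_seed % steps);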
-
- 19 Feb 2016, 3 commits
-
-
Committed by Ard Biesheuvel
This relaxes the kernel Image placement requirements, so that it may be placed at any 2 MB aligned offset in physical memory. This is accomplished by ignoring PHYS_OFFSET when installing memblocks, and accounting for the apparent virtual offset of the kernel Image. As a result, virtual address references below PAGE_OFFSET are correctly mapped onto physical references into the kernel Image regardless of where it sits in memory. Special care needs to be taken for dealing with memory limits passed via mem=, since the generic implementation clips memory top down, which may clip the kernel image itself if it is loaded high up in memory. To deal with this case, we simply add back the memory covering the kernel image, which may result in more memory being retained than was passed as a mem= parameter. Since mem= should not be considered a production feature, a panic notifier handler is installed that dumps the memory limit at panic time if one was set. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
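A rough sketch of the mem= handling described above (simplified from the description, not the literal patch): clip to the limit, then add back whatever part of the kernel image the top-down clipping may have removed.

    if (memory_limit != (phys_addr_t)ULLONG_MAX) {
        memblock_enforce_memory_limit(memory_limit);
        memblock_add(__pa(_text), (u64)(_end - _text));
    }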
-
Committed by Ard Biesheuvel
Before deferring the assignment of memstart_addr in a subsequent patch to the moment where all memory has been discovered and possibly clipped, based on the size of the linear region and the presence of a mem= command line parameter, we need to ensure that memstart_addr is not used to perform __va translations before it is assigned. One such use is in the generic early DT discovery of the initrd location, which is recorded as a virtual address in the globals initrd_start and initrd_end. So wire up the generic support to declare the initrd addresses, implement it without __va() translations, and perform the translation only after memstart_addr has been assigned. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel
This moves the module area to right before the vmalloc area, and moves the kernel image to the base of the vmalloc area. This is an intermediate step towards implementing KASLR, which allows the kernel image to be located anywhere in the vmalloc area. Since other subsystems such as hibernate may still need to refer to the kernel text or data segments via their linear addresses, both are mapped in the linear region as well. The linear alias of the text region is mapped read-only/non-executable to prevent inadvertent modification or execution. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 11 Dec 2015, 1 commit
-
-
Committed by Mark Rutland
Currently we treat the alternatives separately from other data that's only used during initialisation, using separate .altinstructions and .altinstr_replacement linker sections. These are freed for general allocation separately from .init*. This is problematic as:
* We do not remove execute permissions, as we do for .init, leaving the memory executable.
* We pad between them, making the kernel Image binary up to PAGE_SIZE bytes larger than necessary.
This patch moves the two sections into the contiguous region used for .init*. This saves some memory, ensures that we remove execute permissions, and allows us to remove some code made redundant by this reorganisation. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Andre Przywara <andre.przywara@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Jeremy Linton <jeremy.linton@arm.com> Cc: Laura Abbott <labbott@fedoraproject.org> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 10 Dec 2015, 1 commit
-
-
Committed by Ard Biesheuvel
Take the new memblock attribute MEMBLOCK_NOMAP into account when deciding whether a certain region is or should be covered by the kernel direct mapping. Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 02 Dec 2015, 1 commit
-
-
Committed by Jisheng Zhang
These functions/variables are not needed after booting, so mark them as __init or __initdata. Signed-off-by: Jisheng Zhang <jszhang@marvell.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 30 Oct 2015, 1 commit
-
-
Committed by Robin Murphy
Trying to build with CONFIG_ZONE_DMA=n leaves visible references to the now-undefined ZONE_DMA, resulting in a build error. Hide the references behind an #ifdef instead of using IS_ENABLED. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
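A sketch of why the #ifdef is needed (illustrative, not the exact hunk; the max_zone_pfns and max_zone_dma_phys() names are taken as representative examples): with CONFIG_ZONE_DMA=n the ZONE_DMA enumerator itself does not exist, so even a branch that IS_ENABLED() would optimise away still fails to compile.

    /* Fails to build with CONFIG_ZONE_DMA=n: ZONE_DMA is an undeclared identifier. */
    if (IS_ENABLED(CONFIG_ZONE_DMA))
        max_zone_pfns[ZONE_DMA] = PFN_DOWN(max_zone_dma_phys());

    /* Builds either way: the reference disappears when the zone is not configured. */
    #ifdef CONFIG_ZONE_DMA
        max_zone_pfns[ZONE_DMA] = PFN_DOWN(max_zone_dma_phys());
    #endif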
-
- 13 Oct 2015, 1 commit
-
-
Committed by Linus Walleij
This prints out the virtual memory assigned to KASan in the boot crawl along with other memory assignments, if and only if KASan is activated. Example dmesg from the Juno Development board:

Memory: 1691156K/2080768K available (5465K kernel code, 444K rwdata, 2160K rodata, 340K init, 217K bss, 373228K reserved, 16384K cma-reserved)
Virtual kernel memory layout:
    kasan   : 0xffffff8000000000 - 0xffffff9000000000   (    64 GB)
    vmalloc : 0xffffff9000000000 - 0xffffffbdbfff0000   (   182 GB)
    vmemmap : 0xffffffbdc0000000 - 0xffffffbfc0000000   (     8 GB maximum)
              0xffffffbdc2000000 - 0xffffffbdc3fc0000   (    31 MB actual)
    fixed   : 0xffffffbffabfd000 - 0xffffffbffac00000   (    12 KB)
    PCI I/O : 0xffffffbffae00000 - 0xffffffbffbe00000   (    16 MB)
    modules : 0xffffffbffc000000 - 0xffffffc000000000   (    64 MB)
    memory  : 0xffffffc000000000 - 0xffffffc07f000000   (  2032 MB)
      .init : 0xffffffc0007f5000 - 0xffffffc00084a000   (   340 KB)
      .text : 0xffffffc000080000 - 0xffffffc0007f45b4   (  7634 KB)
      .data : 0xffffffc000850000 - 0xffffffc0008bf200   (   445 KB)

Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 28 Jul 2015, 1 commit
-
-
Committed by Wang Long
free_initrd_mem() is not needed after booting, so this patch moves it to the __init section. It also makes keep_initrd __initdata, to reduce kernel size. Signed-off-by: Wang Long <long.wanglong@huawei.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
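The result is small enough to sketch in full (approximate, reconstructed from the description above):

    static int keep_initrd __initdata;

    void __init free_initrd_mem(unsigned long start, unsigned long end)
    {
        if (!keep_initrd)
            free_reserved_area((void *)start, (void *)end, 0, "initrd");
    }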
-
- 17 Jun 2015, 1 commit
-
-
Committed by Dave P Martin
The memmap freeing code in free_unused_memmap() computes the end of each memblock by adding the memblock size onto the base. However, if SPARSEMEM is enabled then the value (start) used for the base may already have been rounded downwards to work out which memmap entries to free after the previous memblock. This may cause memmap entries that are in use to get freed. In general, you're not likely to hit this problem unless there are at least 2 memblocks and one of them is not aligned to a sparsemem section boundary. Note that carve-outs can increase the number of memblocks by splitting the regions listed in the device tree. This problem doesn't occur with SPARSEMEM_VMEMMAP, because the vmemmap code deals with freeing the unused regions of the memmap instead of requiring the arch code to do it. This patch gets the memblock base out of the memblock directly when computing the block end address to ensure the correct value is used. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Cc: <stable@vger.kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
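A hedged sketch of the fix (variable names and alignment macros approximate, not the exact diff): the end of each block is recomputed from the memblock itself instead of from 'start', which may already have been lowered for sparsemem section alignment.

    for_each_memblock(memory, reg) {
        start = __phys_to_pfn(reg->base);
    #ifdef CONFIG_SPARSEMEM
        start = min(start, ALIGN(prev_end, PAGES_PER_SECTION)); /* start may be lowered here */
    #endif
        if (prev_end && prev_end < start)
            free_memmap(prev_end, start);

        /* Before the fix: ALIGN(start + __phys_to_pfn(reg->size), ...) inherited the rounding. */
        prev_end = ALIGN(__phys_to_pfn(reg->base + reg->size), MAX_ORDER_NR_PAGES);
    }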
-
- 02 Jun 2015, 2 commits
-
-
Committed by Ard Biesheuvel
Currently, the FDT blob needs to be in the same 512 MB region as the kernel, so that it can be mapped into the kernel virtual memory space very early on using a minimal set of statically allocated translation tables. Now that we have early fixmap support, we can relax this restriction, by moving the permanent FDT mapping to the fixmap region instead. This way, the FDT blob may be anywhere in memory. This also moves the vetting of the FDT to mmu.c, since the early init code in head.S does not handle mapping of the FDT anymore. At the same time, fix up some comments in head.S that have gone stale. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel
This splits off the reservation of the memory occupied by the FDT binary itself from the processing of the memory reservations it contains. This is necessary because the physical address of the FDT, which is needed to perform the reservation, may not be known to the FDT driver core, i.e., it may be mapped outside the linear direct mapping, in which case __pa() returns a bogus value. Cc: Russell King <linux@arm.linux.org.uk> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Acked-by: Rob Herring <robh@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 15 Apr 2015, 1 commit
-
-
Committed by Vladimir Murzin
Add support for the memtest command line option. Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Russell King <rmk@arm.linux.org.uk> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
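The arm64 hook itself is essentially a one-liner; a sketch, with its placement in bootmem_init() taken as an assumption from the surrounding code:

    /* min/max are the PFN bounds already computed for the zones. */
    early_memtest(min << PAGE_SHIFT, max << PAGE_SHIFT);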
-
- 28 Feb 2015, 1 commit
-
-
Committed by Catalin Marinas
With commit 3690951f (arm64: Use swiotlb late initialisation), the swiotlb buffer size is limited to MAX_ORDER_NR_PAGES. However, there are platforms with 32-bit only devices that require bounce buffering via swiotlb. This patch changes the swiotlb initialisation to an early 64MB memblock allocation. In order to get the swiotlb buffer correctly allocated (via memblock_virt_alloc_low_nopanic), this patch also defines ARCH_LOW_ADDRESS_LIMIT to the maximum physical address capable of 32-bit DMA. Reported-by: Kefeng Wang <wangkefeng.wang@huawei.com> Tested-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
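A compressed sketch of the two pieces described above (illustrative, not the literal diff; arm64_dma_phys_limit is assumed to hold the maximum 32-bit DMA address):

    /* Keep low memblock allocations 32-bit addressable so the bounce buffer
     * remains usable by 32-bit-only devices. */
    #define ARCH_LOW_ADDRESS_LIMIT  (arm64_dma_phys_limit)

    /* And use the early, memblock-backed swiotlb initialisation in mem_init(). */
    swiotlb_init(1);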
-
- 23 Jan 2015, 1 commit
-
-
Committed by Mark Rutland
PCI IO space was intended to be 16MiB, at 32MiB below MODULES_VADDR, but commit d1e6dc91 ("arm64: Add architectural support for PCI") extended this to cover the full 32MiB. The final 8KiB of this 32MiB is also allocated for the fixmap, allowing for potential clashes between the two. This change was masked by assumptions in mem_init and the page table dumping code, which assumed the I/O space to be 16MiB long through separate hard-coded definitions. This patch changes the definition of the PCI I/O space allocation to live in asm/memory.h, along with the other VA space allocations. As the fixmap allocation depends on the number of fixmap entries, this is moved below the PCI I/O space allocation. Both the fixmap and PCI I/O space are guarded with 2MB of padding. Sites assuming the I/O space was 16MiB are moved over to use the new PCI_IO_{START,END} definitions, which will keep in sync with the size of the IO space (now restored to 16MiB). As a useful side effect, the use of the new PCI_IO_{START,END} definitions prevents a build issue in the dumping code due to a (now redundant) missing include of io.h for PCI_IOBASE. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Kees Cook <keescook@chromium.org> Cc: Laura Abbott <lauraa@codeaurora.org> Cc: Liviu Dudau <liviu.dudau@arm.com> Cc: Steve Capper <steve.capper@linaro.org> Cc: Will Deacon <will.deacon@arm.com> [catalin.marinas@arm.com: reorder FIXADDR and PCI_IO address_markers_idx enum] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
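The reworked allocation reads roughly like this (a sketch in the style of asm/memory.h; the values follow from the description above):

    #define PCI_IO_SIZE   SZ_16M
    /* 2MB guard gap below the modules area, the 16MiB I/O window, then
     * another 2MB gap before the fixmap. */
    #define PCI_IO_END    (MODULES_VADDR - SZ_2M)
    #define PCI_IO_START  (PCI_IO_END - PCI_IO_SIZE)
    #define FIXADDR_TOP   (PCI_IO_START - SZ_2M)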
-
- 22 Jan 2015, 1 commit
-
-
Committed by Laura Abbott
Add page protections for arm64 similar to those in arm. This is for security reasons to prevent certain classes of exploits. The current method:
- Map all memory as either RWX or RW. We round to the nearest section to avoid creating page tables before everything is mapped
- Once everything is mapped, if either end of the RWX section should not be X, we split the PMD and remap as necessary
- When initmem is to be freed, we change the permissions back to RW (using stop_machine if necessary to flush the TLB)
- If CONFIG_DEBUG_RODATA is set, the read-only sections are set read only.
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Kees Cook <keescook@chromium.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 17 Jan 2015, 1 commit
-
-
Committed by Mark Rutland
When booting with EFI, we acquire the EFI memory map after parsing the early params. This unfortunately renders the mem= option useless, as we call memblock_enforce_memory_limit (which uses memblock_remove_range behind the scenes) before we've added any memblocks. We end up removing nothing, then adding all of memory later when efi_init calls reserve_regions. Instead, we can log the limit and apply this later when we do the rest of the memblock work in memblock_init, which should work regardless of the presence of EFI. At the same time we may as well move the early parameter into arm64's mm/init.c, close to arm64_memblock_init. Any memory which must be mapped (e.g. for use by EFI runtime services) must be mapped explicitly rather than relying on the linear mapping, which may be truncated as a result of a mem= option passed on the kernel command line. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Leif Lindholm <leif.lindholm@linaro.org> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 16 Jan 2015, 1 commit
-
-
Committed by Catalin Marinas
This patch partially reverts commit 421520ba (only the arm64 part). There is no guarantee that the boot-loader places other images like dtb in a different page than initrd start/end, especially when the kernel is built with 64KB pages. When this happens, such pages must not be freed. The free_reserved_area() already takes care of rounding up "start" and rounding down "end" to avoid freeing partially used pages. Cc: <stable@vger.kernel.org> # 3.17+ Reported-by: Peter Maydell <Peter.Maydell@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 25 Nov 2014, 1 commit
-
-
Committed by Andre Przywara
With a blatant copy of some x86 bits we introduce the alternative runtime patching "framework" to arm64. This is quite basic for now and we only provide the functions we need at this time. This is connected to the newly introduced feature bits. Signed-off-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 03 Oct 2014, 1 commit
-
-
Committed by Yalin Wang
This patch extends the start and end address of initrd to be page aligned, so that we can free all memory including the un-page-aligned head or tail pages of initrd. If the start or end address of initrd is not page aligned, that page cannot be freed by the free_initrd_mem() function. Signed-off-by: Yalin Wang <yalin.wang@sonymobile.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
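In effect the initrd bounds are widened to whole pages before reserving and freeing, along these lines (sketch):

    /* Round outward so the partially used head and tail pages are also
     * covered and can later be returned by free_initrd_mem(). */
    unsigned long aligned_start = round_down(initrd_start, PAGE_SIZE);
    unsigned long aligned_end   = round_up(initrd_end, PAGE_SIZE);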
-
- 18 Sep 2014, 1 commit
-
-
Committed by Ganapatrao Kulkarni
Initialize max_mapnr using the set_max_mapnr() helper function instead of a direct assignment. Also do not add PHYS_PFN_OFFSET to max_pfn, since it already contains it. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ganapatrao Kulkarni <ganapatrao.kulkarni@caviumnetworks.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 09 Sep 2014, 1 commit
-
-
Committed by Mark Salter
Commit 86c8b27a ("arm64: ignore DT memreserve entries when booting in UEFI mode") prevents early_init_fdt_scan_reserved_mem() from being called for arm64 kernels booting via UEFI. This was done because the kernel will use the UEFI memory map to determine reserved memory regions. That approach has problems in that early_init_fdt_scan_reserved_mem() also reserves the FDT itself and any node-specific reserved memory. With some kernel configs, the FDT may be overwritten before it can be unflattened and the kernel will fail to boot. More subtle problems will result if the FDT has node-specific reserved memory which is not really reserved. This patch has the UEFI stub remove the memory reserve map entries from the FDT as it does with the memory nodes. This allows early_init_fdt_scan_reserved_mem() to be called unconditionally so that the other needed reservations are made. Signed-off-by: Mark Salter <msalter@redhat.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
-
- 20 Aug 2014, 1 commit
-
-
Committed by Leif Lindholm
UEFI provides its own method for marking regions to reserve, via the memory map which is also used to initialise memblock. So when using the UEFI memory map, ignore any memreserve entries present in the DT. Reported-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 23 Jul 2014, 2 commits
-
-
Committed by Catalin Marinas
Rather than guessing what the maximum vmemmap space should be, this patch allows the calculation based on the VA_BITS and sizeof(struct page). The vmalloc space extends to the beginning of the vmemmap space. Since the virtual kernel memory layout now depends on the build configuration, this patch removes the detailed description in Documentation/arm64/memory.txt in favour of information printed during kernel booting. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
-
Committed by Catalin Marinas
ZONE_DMA is created to allow 32-bit only devices to access memory in the absence of an IOMMU. On systems where the memory starts above 4GB, it is expected that some devices have a DMA offset hardwired to be able to access the bottom of the memory. Linux currently supports DT bindings for the DMA offsets but they are not (easily) available early during boot. This patch tries to guess a DMA offset and assumes that ZONE_DMA corresponds to the 32-bit mask above the start of DRAM. Fixes: 2d5a5612 (arm64: Limit the CMA buffer to 32-bit if ZONE_DMA) Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: Mark Salter <msalter@redhat.com> Tested-by: Mark Salter <msalter@redhat.com> Tested-by: Anup Patel <anup.patel@linaro.org>
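The guessed limit can be sketched as follows (close in spirit to the helper the patch adds; treat the details as illustrative):

    /* Assume 32-bit devices can address the first 4GB above the start of DRAM. */
    static phys_addr_t __init max_zone_dma_phys(void)
    {
        phys_addr_t offset = memblock_start_of_DRAM() & GENMASK_ULL(63, 32);
        return min(offset + (1ULL << 32), memblock_end_of_DRAM());
    }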
-
- 10 Jul 2014, 1 commit
-
-
Committed by Mark Rutland
Currently we place swapper_pg_dir and idmap_pg_dir below the kernel image, between PHYS_OFFSET and (PHYS_OFFSET + TEXT_OFFSET). However, bootloaders may use portions of this memory below the kernel and we do not parse the memory reservation list until after the MMU has been enabled. As such we may clobber some memory a bootloader wishes to have preserved. To enable the use of all of this memory by bootloaders (when the required memory reservations are communicated to the kernel) it is necessary to move our initial page tables elsewhere. As we currently have an effectively unbound requirement for memory at the end of the kernel image for .bss, we can place the page tables here. This patch moves the initial page table to the end of the kernel image, after the BSS. As they do not consist of any initialised data they will be stripped from the kernel Image as with the BSS. The BSS clearing routine is updated to stop at __bss_stop rather than _end so as to not clobber the page tables, and memory reservations made redundant by the new organisation are removed. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Laura Abbott <lauraa@codeaurora.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 18 Jun 2014, 1 commit
-
-
Committed by Catalin Marinas
When the CMA buffer is allocated, it is too early to know whether devices will require ZONE_DMA memory. This patch limits the CMA buffer to (DMA_BIT_MASK(32) + 1) if CONFIG_ZONE_DMA is enabled. In addition, it computes the dma_to_phys(DMA_BIT_MASK(32)) before the increment (no current functional change). Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
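A sketch of the resulting ordering (simplified from the description above):

    phys_addr_t dma_phys_limit = 0;

    /* 4GB maximum for 32-bit-only capable devices. */
    if (IS_ENABLED(CONFIG_ZONE_DMA))
        dma_phys_limit = dma_to_phys(NULL, DMA_BIT_MASK(32)) + 1;
    dma_contiguous_reserve(dma_phys_limit);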
-
- 30 Apr 2014, 1 commit
-
-
Committed by Rob Herring
Move the /memreserve/ processing and dtb memory reservations into early_init_fdt_scan_reserved_mem. This converts arm, arm64, and powerpc as they are the only users of early_init_fdt_scan_reserved_mem. memblock_reserve is safe to call on the same region twice, so the reservation check for the dtb in powerpc 32-bit reservations is safe to remove. Signed-off-by: Rob Herring <robh@kernel.org> Tested-by: Michal Simek <michal.simek@xilinx.com> Cc: Russell King <linux@arm.linux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Tested-by: Grant Likely <grant.likely@linaro.org> Tested-by: Stephen Chivers <schivers@csc.com>
-
- 13 Mar 2014, 1 commit
-
-
Committed by Marek Szyprowski
Enable reserved memory initialization from device tree. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Grant Likely <grant.likely@linaro.org>
-
- 27 Feb 2014, 2 commits
-
-
Committed by Catalin Marinas
Since arm64 does not support ISA, there is no need for early swiotlb initialisation. This patch switches the DMA mapping code to swiotlb_late_init_with_default_size(). A side effect of this is that GFP_DMA is used for the swiotlb buffer and devices with a 32-bit coherent mask are correctly supported. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Catalin Marinas
On arm64 we do not have two DMA zones, so it does not make sense to implement ZONE_DMA32. This patch replaces ZONE_DMA32 with ZONE_DMA, the latter covering the 32-bit DMA address space to honour GFP_DMA allocations. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 20 Dec 2013, 1 commit
-
-
Committed by Laura Abbott
arm64 targets need the features CMA provides. Add the appropriate hooks, header files, and Kconfig to allow this to happen. Cc: Will Deacon <will.deacon@arm.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 10 Oct 2013, 2 commits
-
-
Committed by Rob Herring
Remove unnecessary prom.h include in preparation to make prom.h optional. Signed-off-by: Rob Herring <rob.herring@calxeda.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Grant Likely <grant.likely@linaro.org> Cc: Will Deacon <will.deacon@arm.com>
-
Committed by Rob Herring
In order to unify the initrd scanning for DT across architectures, make arm64 use initrd_start and initrd_end instead of the physical addresses. Signed-off-by: Rob Herring <rob.herring@calxeda.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-arm-kernel@lists.infradead.org
-
- 24 Jul 2013, 1 commit
-
-
Committed by Santosh Shilimkar
On some PAE architectures, the entire range of physical memory could reside outside the 32-bit limit. These systems need the ability to specify the initrd location using 64-bit numbers. This patch globally modifies the early_init_dt_setup_initrd_arch() function to use 64-bit numbers instead of the current unsigned long. There has been quite a bit of debate about whether to use u64 or phys_addr_t. It was concluded to stick to u64 to be consistent with the rest of the device tree code. As summarized by Geert, "The address to load the initrd is decided by the bootloader/user and set at that point later in time. The dtb should not be tied to the kernel you are booting." More details on the discussion can be found here: https://lkml.org/lkml/2013/6/20/690 and https://lkml.org/lkml/2012/9/13/544 Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Acked-by: Rob Herring <rob.herring@calxeda.com> Acked-by: Vineet Gupta <vgupta@synopsys.com> Acked-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com> Signed-off-by: Grant Likely <grant.likely@linaro.org>
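The arm64 side of the new interface is essentially the following (a sketch based on the description; the conversion to virtual addresses happens via the linear mapping):

    void __init early_init_dt_setup_initrd_arch(u64 start, u64 end)
    {
        initrd_start = (unsigned long)__va(start);
        initrd_end   = (unsigned long)__va(end);
    }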
-