- 20 Aug 2014, 1 commit

Committed by Leif Lindholm

UEFI provides its own method for marking regions to reserve, via the memory map, which is also used to initialise memblock. So when using the UEFI memory map, ignore any memreserve entries present in the DT.

Reported-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
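
A minimal sketch of the resulting behaviour, assuming an efi_enabled() guard around the existing FDT scan; the placement and the exact guard are illustrative, not quoted from the patch:

```c
/* Sketch only: skip DT /memreserve/ processing when UEFI owns the memory map. */
void __init arm64_memblock_init(void)
{
	/* ... memblock regions already populated, from UEFI or DT ... */

	/*
	 * The UEFI memory map already describes every reservation, so
	 * applying the DT /memreserve/ entries on top would be redundant
	 * (and potentially wrong).
	 */
	if (!efi_enabled(EFI_MEMMAP))
		early_init_fdt_scan_reserved_mem();
}
```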

- 23 Jul 2014, 5 commits

Committed by Catalin Marinas

Rather than guessing what the maximum vmemmap space should be, this patch calculates it from VA_BITS and sizeof(struct page). The vmalloc space extends to the beginning of the vmemmap space. Since the virtual kernel memory layout now depends on the build configuration, this patch removes the detailed description in Documentation/arm64/memory.txt in favour of information printed during kernel booting.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
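
The calculation is simple enough to show; a sketch in the style of the arm64 pgtable definitions (exact guard-region and alignment choices here are illustrative):

```c
/*
 * Worst case: one struct page for every page the VA_BITS-sized
 * linear map could contain.
 */
#define VMEMMAP_SIZE	ALIGN((1UL << (VA_BITS - PAGE_SHIFT)) * \
			      sizeof(struct page), PUD_SIZE)

/* vmalloc extends up to vmemmap, leaving a small guard gap. */
#define VMALLOC_END	(PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
```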

Committed by Catalin Marinas

Rather than having several Kconfig options, define an int option, ARM64_PGTABLE_LEVELS, which will also be useful in converting some of the pgtable macros.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>

Committed by Jungseok Lee

This patch implements 4 levels of translation tables, since 3 levels of page tables with 4KB pages cannot support the 40-bit physical address space described in [1]. The restriction is that the kernel logical memory map with 4KB + 3 levels (0xffffffc000000000-0xffffffffffffffff) cannot cover the RAM region from 544GB to 1024GB defined in [1]. Specifically, the ARM64 kernel fails to create a mapping for this region in the map_mem function, since __phys_to_virt overflows the address range for it.

If an SoC design follows the document [1], any RAM beyond 32GB is placed starting at 544GB; even a 64GB system is expected to use the region from 544GB to 576GB for just 32GB of RAM. The natural fix is to enable 4 levels of page tables rather than hack __virt_to_phys and __phys_to_virt. However, it is recommended that 4 levels of page tables only be enabled if the memory map is this sparse or there is around 512GB of RAM.

References
----------
[1]: Principles of ARM Memory Maps, White Paper, Issue C

Signed-off-by: Jungseok Lee <jays.lee@samsung.com>
Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Steve Capper <steve.capper@linaro.org>
[catalin.marinas@arm.com: MEMBLOCK_INITIAL_LIMIT removed, same as PUD_SIZE]
[catalin.marinas@arm.com: early_ioremap_init() updated for 4 levels]
[catalin.marinas@arm.com: 48-bit VA depends on BROKEN until KVM is fixed]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
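
To see the overflow concretely, here is a small self-contained demonstration using the constants from the commit message (the 544GB PHYS_OFFSET comes from [1]; the helper mirrors the kernel's __phys_to_virt arithmetic):

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_OFFSET	0xffffffc000000000ULL	/* 4KB + 3 levels: 256GB linear map */
#define PHYS_OFFSET	(544ULL << 30)		/* DRAM starting at 544GB, per [1] */

/* Same arithmetic as the kernel's __phys_to_virt. */
static uint64_t phys_to_virt(uint64_t pa)
{
	return pa - PHYS_OFFSET + PAGE_OFFSET;
}

int main(void)
{
	uint64_t top = (1024ULL << 30) - 1;	/* last byte of the 544GB-1024GB bank */

	/* A 480GB offset does not fit in a 256GB linear map: the addition
	 * wraps around 2^64 and lands outside the kernel address range. */
	printf("virt of top byte: %#llx\n", (unsigned long long)phys_to_virt(top));
	return 0;
}
```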

Committed by Catalin Marinas

The early_ioremap_init() function already handles the fixmap pte initialisation, so upgrade this to cover all of pud/pmd/pte and remove one page from swapper_pg_dir.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>

Committed by Catalin Marinas

ZONE_DMA is created to allow 32-bit-only devices to access memory in the absence of an IOMMU. On systems where the memory starts above 4GB, it is expected that some devices have a DMA offset hardwired to be able to access the bottom of the memory. Linux currently supports DT bindings for the DMA offsets, but they are not (easily) available early during boot. This patch tries to guess a DMA offset and assumes that ZONE_DMA corresponds to the 32-bit mask above the start of DRAM.

Fixes: 2d5a5612 (arm64: Limit the CMA buffer to 32-bit if ZONE_DMA)
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Mark Salter <msalter@redhat.com>
Tested-by: Mark Salter <msalter@redhat.com>
Tested-by: Anup Patel <anup.patel@linaro.org>
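
A sketch of the guess, close to the shape such a helper takes in arch/arm64/mm/init.c; treat the details as illustrative rather than a quotation of the patch:

```c
/*
 * Guess the highest physical address reachable by a 32-bit-only device,
 * assuming its hardwired DMA offset equals the 4GB-aligned start of DRAM.
 */
static phys_addr_t __init max_zone_dma_phys(void)
{
	phys_addr_t offset = memblock_start_of_DRAM() & GENMASK_ULL(63, 32);

	return min(offset + (1ULL << 32), memblock_end_of_DRAM());
}
```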

- 10 Jul 2014, 1 commit

Committed by Mark Rutland

Currently we place swapper_pg_dir and idmap_pg_dir below the kernel image, between PHYS_OFFSET and (PHYS_OFFSET + TEXT_OFFSET). However, bootloaders may use portions of this memory below the kernel, and we do not parse the memory reservation list until after the MMU has been enabled. As such, we may clobber some memory a bootloader wishes to have preserved.

To enable the use of all of this memory by bootloaders (when the required memory reservations are communicated to the kernel) it is necessary to move our initial page tables elsewhere. As we currently have an effectively unbound requirement for memory at the end of the kernel image for .bss, we can place the page tables there.

This patch moves the initial page tables to the end of the kernel image, after the BSS. As they do not consist of any initialised data, they will be stripped from the kernel Image as with the BSS. The BSS clearing routine is updated to stop at __bss_stop rather than _end so as not to clobber the page tables, and memory reservations made redundant by the new organisation are removed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 09 Jul 2014, 1 commit

Committed by Mark Salter

The __cpu_clear_user_page() and __cpu_copy_user_page() functions are not currently exported. This prevents modules from using clear_user_page() and copy_user_page().

Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
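
The fix is just a pair of export lines next to the definitions in arch/arm64/mm/copypage.c; a sketch (whether the plain or _GPL export variant is used is the patch's choice, _GPL is assumed here):

```c
void __cpu_copy_user_page(void *kto, const void *kfrom, unsigned long vaddr)
{
	copy_page(kto, kfrom);
	__flush_dcache_area(kto, PAGE_SIZE);
}
EXPORT_SYMBOL_GPL(__cpu_copy_user_page);

void __cpu_clear_user_page(void *kaddr, unsigned long vaddr)
{
	clear_page(kaddr);
}
EXPORT_SYMBOL_GPL(__cpu_clear_user_page);
```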

- 04 Jul 2014, 1 commit

Committed by Steve Capper

The __sync_icache_dcache routine will only flush the dcache for the first page of a compound page, potentially leaving stale icache data further on in a hugetlb page. This patch addresses the issue by taking the order of the page into consideration when flushing the dcache.

Reported-by: Mark Brown <broonie@linaro.org>
Tested-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org> # v3.11+
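
The essence of the fix is scaling the flush length by the page's compound order; a sketch of the affected path in __sync_icache_dcache (other checks elided, shape assumed):

```c
void __sync_icache_dcache(pte_t pte, unsigned long addr)
{
	struct page *page = pte_page(pte);

	/* ... exec/aliasing checks elided ... */

	if (!test_and_set_bit(PG_dcache_clean, &page->flags))
		/* Flush the whole compound page, not just the head page:
		 * PAGE_SIZE << compound_order(page) covers the full hugepage. */
		__flush_dcache_area(page_address(page),
				    PAGE_SIZE << compound_order(page));
}
```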

- 18 Jun 2014, 1 commit

Committed by Catalin Marinas

When the CMA buffer is allocated, it is too early to know whether devices will require ZONE_DMA memory. This patch limits the CMA buffer to (DMA_BIT_MASK(32) + 1) if CONFIG_ZONE_DMA is enabled. In addition, it computes dma_to_phys(DMA_BIT_MASK(32)) before the increment (no current functional change).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
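
A sketch of how the limit is applied when reserving the CMA buffer (placement in arm64's memblock init assumed):

```c
	phys_addr_t dma_phys_limit = 0;

	/* A 32-bit limit is only meaningful if ZONE_DMA will exist;
	 * dma_to_phys() translates the 32-bit device-side mask first. */
	if (IS_ENABLED(CONFIG_ZONE_DMA))
		dma_phys_limit = dma_to_phys(NULL, DMA_BIT_MASK(32)) + 1;

	dma_contiguous_reserve(dma_phys_limit);
```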

- 05 Jun 2014, 1 commit

Committed by Naoya Horiguchi

Currently hugepage migration is available for all archs which support pmd-level hugepages, but testing has been done only for x86_64 and there are bugs on other archs. So, to avoid breaking such archs, this patch limits the availability strictly to x86_64 until developers of other archs get interested in enabling this feature. Simply disabling hugepage migration on non-x86_64 archs is not enough to fix the reported problem where sys_move_pages() hits the BUG_ON() in follow_page(FOLL_GET), so let's fix this by checking whether hugepage migration is supported in vma_migratable().

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Reported-by: Michael Ellerman <mpe@ellerman.id.au>
Tested-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Miller <davem@davemloft.net>
Cc: <stable@vger.kernel.org> [3.12+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
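
A sketch of the vma_migratable() check described above, assuming an arch opt-in config in the style of that era's mm headers (names illustrative, remaining checks elided):

```c
static inline int vma_migratable(struct vm_area_struct *vma)
{
	if (vma->vm_flags & (VM_IO | VM_PFNMAP))
		return 0;

#ifndef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
	/* Hugetlb VMAs are only migratable if the arch has opted in
	 * (initially only x86_64 selects this). */
	if (vma->vm_flags & VM_HUGETLB)
		return 0;
#endif
	/* ... remaining checks unchanged ... */
	return 1;
}
```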

- 17 May 2014, 1 commit

Committed by Mark Salter

The following happens when trying to run a kvm guest on a kernel configured for 64k pages. This doesn't happen with 4k pages:

  BUG: failure at include/linux/mm.h:297/put_page_testzero()!
  Kernel panic - not syncing: BUG!
  CPU: 2 PID: 4228 Comm: qemu-system-aar Tainted: GF 3.13.0-0.rc7.31.sa2.k32v1.aarch64.debug #1
  Call trace:
  [<fffffe0000096034>] dump_backtrace+0x0/0x16c
  [<fffffe00000961b4>] show_stack+0x14/0x1c
  [<fffffe000066e648>] dump_stack+0x84/0xb0
  [<fffffe0000668678>] panic+0xf4/0x220
  [<fffffe000018ec78>] free_reserved_area+0x0/0x110
  [<fffffe000018edd8>] free_pages+0x50/0x88
  [<fffffe00000a759c>] kvm_free_stage2_pgd+0x30/0x40
  [<fffffe00000a5354>] kvm_arch_destroy_vm+0x18/0x44
  [<fffffe00000a1854>] kvm_put_kvm+0xf0/0x184
  [<fffffe00000a1938>] kvm_vm_release+0x10/0x1c
  [<fffffe00001edc1c>] __fput+0xb0/0x288
  [<fffffe00001ede4c>] ____fput+0xc/0x14
  [<fffffe00000d5a2c>] task_work_run+0xa8/0x11c
  [<fffffe0000095c14>] do_notify_resume+0x54/0x58

In arch/arm/kvm/mmu.c:unmap_range(), we end up doing an extra put_page() on the stage2 pgd, which leads to the BUG in put_page_testzero(). This happens because a pud_huge() test in unmap_range() returns true when it should always be false with the 2-level page tables used by 64k pages. This patch removes support for huge puds if 2-level pagetables are being used.

Signed-off-by: Mark Salter <msalter@redhat.com>
[catalin.marinas@arm.com: removed #ifndef around PUD_SIZE check]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org> # v3.11+
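
The fix boils down to pud_huge() reporting false whenever the pmd level is folded; a sketch of the shape it takes in arch/arm64/mm/hugetlbpage.c (illustrative, not a quotation):

```c
int pud_huge(pud_t pud)
{
#ifndef __PAGETABLE_PMD_FOLDED
	return !(pud_val(pud) & PUD_TABLE_BIT);
#else
	/* With 64k pages the kernel uses 2 levels: there is no real pud. */
	return 0;
#endif
}
```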

- 16 May 2014, 1 commit

Committed by Catalin Marinas

This reverts commit bc07c2c6. While the aim is increased security for --x memory maps, it does not protect against kernel-level reads. Until SECCOMP is implemented for arm64, revert this patch to avoid giving a false idea of execute-only mappings.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 10 May 2014, 2 commits

Committed by Will Deacon

In order to ensure ordering and completion of inner-shareable maintenance instructions (cache and TLB) on AArch64, we can use the -ish suffix to the dmb and dsb instructions respectively. This patch updates our low-level cache and tlb maintenance routines to use the inner-shareable barrier variants where appropriate.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
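
For context, arm64's barrier macros take the barrier option as an argument, so the change is a matter of passing ish/ishst instead of the full-system sy; a hedged illustration (the example function name is made up):

```c
#define dmb(opt)	asm volatile("dmb " #opt : : : "memory")
#define dsb(opt)	asm volatile("dsb " #opt : : : "memory")

/* e.g. broadcast TLB invalidation only needs inner-shareable scope: */
static inline void flush_tlb_all_example(void)
{
	dsb(ishst);			/* prior PTE writes visible to walkers */
	asm("tlbi vmalle1is");		/* invalidate, inner-shareable */
	dsb(ish);			/* wait for completion across the IS domain */
	isb();
}
```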

Committed by Steve Capper

The tlb maintenance functions __cpu_flush_user_tlb_range and __cpu_flush_kern_tlb_range do not take the page granule into consideration when looping through the address range, and repeatedly flush tlb entries for the same page when operating with 64K pages. This patch reworks the logic such that we instead advance the loop by 1 << (PAGE_SHIFT - 12), to avoid repeating ourselves.

Also, the routines have been converted from assembler to static inline functions to aid legibility and potential compiler optimisations. The isb() has been removed from flush_tlb_kernel_range(), as it is only needed when changing the execute permission of a mapping. If one needs to set an area of the kernel as execute/non-execute, an isb() must be inserted after the call to flush_tlb_kernel_range.

Cc: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
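
The tlbi address operand is always the VA shifted right by 12, regardless of the page size, hence the 1 << (PAGE_SHIFT - 12) stride; a sketch of the reworked range flush (assumed shape):

```c
static inline void __flush_tlb_range(struct vm_area_struct *vma,
				     unsigned long start, unsigned long end)
{
	unsigned long asid = (unsigned long)ASID(vma->vm_mm) << 48;
	unsigned long addr;

	/* tlbi operands are VA >> 12 with the ASID in the top bits. */
	start = asid | (start >> 12);
	end = asid | (end >> 12);

	dsb(ishst);
	/* One tlbi per page: stride is 1 for 4K pages, 16 for 64K pages. */
	for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12))
		asm("tlbi vae1is, %0" : : "r" (addr));
	dsb(ish);
}
```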

- 09 May 2014, 4 commits

Committed by Steve Capper

We have the capability to map 1GB level-1 blocks when using a 4K granule. This patch adjusts the create_mapping logic such that, when mapping physical memory on boot, we attempt to use a 1GB block if both the VA and PA start and end are 1GB aligned. This both reduces the levels of lookup required to resolve a kernel logical address and reduces TLB pressure on cores that support 1GB TLB entries.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Tested-by: Jungseok Lee <jays.lee@samsung.com>
[catalin.marinas@arm.com: s/prot_sect_kernel/PROT_SECT_NORMAL_EXEC/]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
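
A sketch of the alignment test and the pud-level block write, in the shape such code takes in arch/arm64/mm/mmu.c (illustrative):

```c
static inline bool use_1G_block(unsigned long addr, unsigned long next,
				unsigned long phys)
{
	if (PAGE_SHIFT != 12)		/* 1GB blocks only exist for 4K granule */
		return false;

	/* VA range and PA must all be 1GB (PUD_MASK) aligned. */
	if (((addr | next | phys) & ~PUD_MASK) != 0)
		return false;

	return true;
}

	/* ...then, in the pud-level loop of create_mapping(): */
	if (use_1G_block(addr, next, phys))
		set_pud(pud, __pud(phys | PROT_SECT_NORMAL_EXEC));
```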

Committed by Catalin Marinas

The primary aim of this patchset is to remove the pgprot_default and prot_sect_default global variables and rely strictly on predefined values. The original goal was to be able to run SMP kernels on UP hardware by not setting the Shareability bit. However, it is unlikely that we will see UP ARMv8 hardware, and even if we do, the Shareability bit is no longer assumed to disable cacheable accesses.

A side effect is that the device mappings now have the Shareability attribute set. The hardware, however, should ignore it since Device accesses are always Outer Shareable.

Following the removal of the two global variables, there is some PROT_* macro reshuffling and cleanup, including the __PAGE_* macros (replaced by PAGE_*).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>

Committed by Catalin Marinas

The ARMv8 architecture allows execute-only user permissions by clearing the PTE_UXN and PTE_USER bits. The kernel, however, can still access such a page, so execute-only page permission does not protect against read(2)/write(2) etc. accesses. Systems requiring such protection must implement/enable features like SECCOMP.

This patch changes the arm64 __P100 and __S100 protection_map[] macros to the new __PAGE_EXECONLY attributes. A side effect is that pte_valid_user() no longer triggers for __PAGE_EXECONLY since PTE_USER isn't set. To work around this, the check is done on the PTE_NG bit via the pte_valid_ng() macro. VM_READ is also checked now for page faults.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
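
A hedged sketch of the two definitions involved (bit names in the style of arm64's pgtable headers; the exact composition is illustrative):

```c
/* Executable by user: PTE_UXN clear, but PTE_USER also clear, so the
 * page is not readable/writable from EL0; PXN keeps the kernel from
 * executing it. */
#define __PAGE_EXECONLY	__pgprot(_PAGE_DEFAULT | PTE_NG | PTE_PXN)

/* User ptes can no longer be spotted via PTE_USER alone, so key off
 * PTE_NG, which is set for all user mappings including execute-only: */
#define pte_valid_ng(pte) \
	((pte_val(pte) & (PTE_VALID | PTE_NG)) == (PTE_VALID | PTE_NG))
```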

Committed by Catalin Marinas

For AArch32, bit 11 (WnR) of the FSR/ESR register is set when the fault was caused by a write access, and applications like Qemu rely on such information being provided in sigcontext. This patch introduces ESR_EL1 tracking for arm64 kernel faults and sets bit 11 accordingly in the compat sigcontext.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 04 May 2014, 3 commits

Committed by Catalin Marinas

Recently, the default DMA ops have been changed to non-coherent for alignment with 32-bit ARM platforms (and DT files). This patch adds bus notifiers to be able to set the coherent DMA ops (with no cache maintenance) for devices explicitly marked as coherent via the "dma-coherent" DT property.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
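
A sketch of such a notifier; the ops name is taken from the rename noted in the 26 Feb commit below, and the registration details are elided and illustrative:

```c
static int dma_bus_notifier(struct notifier_block *nb,
			    unsigned long event, void *_dev)
{
	struct device *dev = _dev;

	if (event != BUS_NOTIFY_ADD_DEVICE)
		return NOTIFY_DONE;

	/* Opt the device into cache-maintenance-free ops if the DT says so. */
	if (of_property_read_bool(dev->of_node, "dma-coherent"))
		set_dma_ops(dev, &coherent_swiotlb_dma_ops);

	return NOTIFY_OK;
}

static struct notifier_block dma_bus_nb = {
	.notifier_call = dma_bus_notifier,
};
```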

Committed by Ritesh Harjani

Currently the arm64 dma_ops default to coherent, which is the opposite of the default policy on arm. Make the default dma_ops non-coherent (the same as arm), as there are currently no DMA-capable drivers that assume coherent ops.

Signed-off-by: Ritesh Harjani <ritesh.harjani@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Committed by Dave Anderson

Fix the arm64 kern_addr_valid() function to recognize virtual addresses in the kernel logical memory map. The function fails as written because it does not check whether addresses in that region are mapped at the pmd level to 2MB or 512MB pages; it continues the page table walk to the pte level and issues a garbage value to pfn_valid(). Tested on 4K-page and 64K-page kernels.

Signed-off-by: Dave Anderson <anderson@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
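
The essential addition is a section check inside the walk; a sketch of the assumed shape:

```c
	pmd = pmd_offset(pud, addr);
	if (pmd_none(*pmd))
		return 0;

	/* Block mapping (2MB with 4K pages, 512MB with 64K pages):
	 * stop the walk here, there is no pte level underneath. */
	if (pmd_sect(*pmd))
		return pfn_valid(pmd_pfn(*pmd));
```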

- 01 May 2014, 1 commit

Committed by Mark Salter

At boot time, before switching to a virtual UEFI memory map, firmware expects UEFI memory and IO regions to be identity mapped whenever the kernel makes runtime services calls. The existing early boot code creates an identity map of the kernel text/data, but this is not sufficient for UEFI. This patch adds a create_id_mapping() function which reuses the core code of the existing create_mapping().

Signed-off-by: Mark Salter <msalter@redhat.com>
[ Fixed error message formatting (%pa). ]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>

- 30 Apr 2014, 1 commit

Committed by Rob Herring

Move the /memreserve/ processing and dtb memory reservations into early_init_fdt_scan_reserved_mem. This converts arm, arm64, and powerpc, as they are the only users of early_init_fdt_scan_reserved_mem. memblock_reserve is safe to call on the same region twice, so the reservation check for the dtb in the powerpc 32-bit reservations is safe to remove.

Signed-off-by: Rob Herring <robh@kernel.org>
Tested-by: Michal Simek <michal.simek@xilinx.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Tested-by: Grant Likely <grant.likely@linaro.org>
Tested-by: Stephen Chivers <schivers@csc.com>

- 08 Apr 2014, 3 commits

Committed by Catalin Marinas

If the buffer needing cache invalidation for inbound DMA does not start or end on a cache line aligned address, we need to use the non-destructive clean&invalidate operation. This issue was introduced by commit 7363590d (arm64: Implement coherent DMA API based on swiotlb).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Jon Medhurst (Tixy) <tixy@linaro.org>
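
The real routine (__dma_inv_range) is assembly; here is a C rendition of the fixed logic, with dc_civac()/dc_ivac() as hypothetical stand-ins for the dc civac and dc ivac instructions:

```c
static void __dma_inv_range(unsigned long start, unsigned long end)
{
	unsigned long line = cache_line_size();

	/* An unaligned end may share its cache line with live data:
	 * clean+invalidate so that data is written back, not discarded. */
	if (end & (line - 1)) {
		end &= ~(line - 1);
		dc_civac(end);
	}

	/* Same for an unaligned start. */
	if (start & (line - 1)) {
		start &= ~(line - 1);
		dc_civac(start);
		start += line;
	}

	/* Fully-owned lines in between can be invalidated outright. */
	for (; start < end; start += line)
		dc_ivac(start);

	dsb(sy);
}
```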

Committed by Mark Salter

Add support for early IO or memory mappings which are needed before the normal ioremap() is usable. This also adds fixmap support for permanent fixed mappings such as that used by the earlyprintk device register region.

Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Borislav Petkov <borislav.petkov@amd.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Committed by Mark Salter

Presently, paging_init() calls init_mem_pgprot() to initialize the pgprot values used by macros such as PAGE_KERNEL, PAGE_KERNEL_EXEC, etc. The new fixmap and early_ioremap support also needs to use these macros before paging_init() is called. This patch moves the init_mem_pgprot() call out of paging_init() and into setup_arch() so that pgprot_default gets initialized in time for fixmap and early_ioremap.

Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Borislav Petkov <borislav.petkov@amd.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 05 Apr 2014, 1 commit

Committed by Catalin Marinas

With system caches for the host OS or architected caches for a guest OS, we cannot easily guarantee that there are no dirty or stale cache lines for the areas of memory written by the kernel during boot with the MMU off (and therefore with non-cacheable accesses). This patch adds the necessary cache maintenance during boot and relaxes the booting requirements.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 03 Apr 2014, 1 commit

Committed by Catalin Marinas

The current TCR register setting in arch/arm64/mm/proc.S assumes that the TCR_EL1.TG* fields are one bit wide and that bit 31 is RES1 (reserved, set to 1). With the addition of 16K pages (currently unsupported in the kernel), the TCR_EL1.TG* fields have been extended to two bits. This patch updates the corresponding Linux definitions and drops the bit 31 setting in proc.S in favour of the new macros.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Joe Sylve <joe.sylve@gmail.com>
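
For reference, the two-bit granule encodings look roughly like this (values per the ARMv8 ARM; the macro names follow the style of arm64's pgtable-hwdef.h and should be treated as illustrative):

```c
/* TCR_EL1.TG0, bits [15:14]: granule for TTBR0 */
#define TCR_TG0_4K	(UL(0) << 14)
#define TCR_TG0_64K	(UL(1) << 14)
#define TCR_TG0_16K	(UL(2) << 14)

/* TCR_EL1.TG1, bits [31:30]: granule for TTBR1 (note: different encoding) */
#define TCR_TG1_16K	(UL(1) << 30)
#define TCR_TG1_4K	(UL(2) << 30)
#define TCR_TG1_64K	(UL(3) << 30)
```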

- 24 Mar 2014, 3 commits

Committed by Catalin Marinas

Since this macro is identical to pgprot_writecombine() and is only used in a single place, remove it completely to avoid confusion. On ARMv7+ processors, the coherent DMA mapping must be Normal NonCacheable (a.k.a. writecombine) to avoid mismatched hardware attribute aliases (with the kernel linear mapping as Normal Cacheable).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Committed by Laura Abbott

DMA_ATTR_WRITE_COMBINE is currently ignored. Set the pgprot appropriately for non-coherent operations.

Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
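
A sketch of the attribute handling; the dma_attrs interface shown is the one from that kernel era, and the details are illustrative:

```c
static pgprot_t __get_dma_pgprot(struct dma_attrs *attrs, pgprot_t prot,
				 bool coherent)
{
	/* Non-coherent mappings and explicit WRITE_COMBINE requests both
	 * get Normal NonCacheable (writecombine) attributes. */
	if (!coherent || dma_get_attr(DMA_ATTR_WRITE_COMBINE, attrs))
		return pgprot_writecombine(prot);
	return prot;
}
```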

Committed by Laura Abbott

The current dma_ops do not specify an mmap function, so mapping falls back to the default implementation. There are at least two issues with using the default implementation:

1) The pgprot is always pgprot_noncached (strongly ordered) memory, even with coherent operations.
2) dma_common_mmap calls virt_to_page on the remapped non-coherent address, which leads to invalid memory being mapped.

Fix both these issues by implementing a custom mmap function which correctly accounts for remapped addresses and sets vm_page_prot appropriately.

Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
[catalin.marinas@arm.com: replaced "arm64_" with "__" prefix for consistency]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
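
A sketch of such an mmap implementation (shape assumed; the bounds check and deriving the pfn from the dma address, rather than from virt_to_page(), are the important parts):

```c
static int __dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
			     void *cpu_addr, dma_addr_t dma_addr, size_t size)
{
	unsigned long nr_vma_pages = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
	unsigned long nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
	unsigned long pfn = dma_to_phys(dev, dma_addr) >> PAGE_SHIFT;
	unsigned long off = vma->vm_pgoff;
	int ret = -ENXIO;

	/* pfn comes from the dma address, never virt_to_page() on a
	 * remapped (vmalloc-style) cpu_addr. */
	if (off < nr_pages && nr_vma_pages <= (nr_pages - off))
		ret = remap_pfn_range(vma, vma->vm_start, pfn + off,
				      vma->vm_end - vma->vm_start,
				      vma->vm_page_prot);

	return ret;
}
```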

- 13 Mar 2014, 2 commits

Committed by Radha Mohan Chintakuntla

ARMv8 supports a range of physical address bit sizes. The PARange bits from the ID_AA64MMFR0_EL1 register are read during boot-time, and the intermediate physical address size bits are written into the translation control registers (TCR_EL1 and VTCR_EL2). There is no change in the VA bits or levels of translation.

Signed-off-by: Radha Mohan Chintakuntla <rchintakuntla@cavium.com>
Reviewed-by: Will Deacon <Will.deacon@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
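
In C terms the idea is as follows; the actual code lives in assembly in proc.S, the field positions are per the ARMv8 ARM, and everything else here is illustrative:

```c
	u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
	u64 parange = mmfr0 & 0x7;	/* PARange lives in ID_AA64MMFR0_EL1[3:0] */

	/* Copy the supported PA size into TCR_EL1.IPS, bits [34:32]. */
	tcr &= ~(7UL << 32);
	tcr |= parange << 32;
```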

Committed by Marek Szyprowski

Enable reserved memory initialization from device tree.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>

- 04 Mar 2014, 1 commit

Committed by Mark Rutland

Currently we flush the entire dcache at boot within __cpu_setup, but this is unnecessary, as the booting protocol demands that the dcache is invalid and off upon entering the kernel. The presence of the cache flush only serves to hide bugs in bootloaders, and it is not safe in the presence of SMP.

In an SMP boot scenario the CPUs enter coherency outside of the kernel, and the primary CPU enables its caches before bringing up secondary CPUs. Therefore, if any secondary CPU has an entry in its cache (in violation of the boot protocol), the primary CPU might snoop it even if the secondary CPU's cache is disabled.

The boot-time cache flush only serves to hide a firmware bug and slows down a cpu boot unnecessarily. This patch removes the unnecessary boot-time cache flush.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
[catalin.marinas@arm.com: make __flush_dcache_all local only]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 28 Feb 2014, 1 commit

Committed by Catalin Marinas

This patch adds support for DMA API cache maintenance on SoCs without hardware device cache coherency.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 27 Feb 2014, 2 commits

Committed by Catalin Marinas

Since arm64 does not support ISA, there is no need for early swiotlb initialisation. This patch switches the DMA mapping code to swiotlb_tlb_late_init_with_default_size(). A side effect of this is that GFP_DMA is used for the swiotlb buffer and devices with a 32-bit coherent mask are correctly supported.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Committed by Catalin Marinas

On arm64 we do not have two DMA zones, so it does not make sense to implement ZONE_DMA32. This patch replaces ZONE_DMA32 with ZONE_DMA, the latter covering the 32-bit dma address space to honour GFP_DMA allocations.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 26 Feb 2014, 1 commit

Committed by Ritesh Harjani

The arm64_swiotlb_alloc/free_coherent names can sometimes be misleading, with CMA support being enabled after this patch (c2104debc235b745265b64d610237a6833fd53). Change the names to be more generic: __dma_alloc/free_coherent.

Signed-off-by: Ritesh Harjani <ritesh.harjani@gmail.com>
[catalin.marinas@arm.com: renamed arm64_swiotlb_dma_ops to coherent_swiotlb_dma_ops]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 05 Feb 2014, 1 commit

Committed by Mark Rutland

Currently pgd_alloc has a redundant NULL check in its return path that can be removed with no ill effects. With that removed, it's also possible to return early and eliminate the new_pgd temporary variable. This patch applies said modifications, making the logic of pgd_alloc correspond 1-1 with that of pgd_free.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
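
The simplified allocator then mirrors pgd_free branch for branch; a sketch of the resulting shape (illustrative):

```c
pgd_t *pgd_alloc(struct mm_struct *mm)
{
	/* Early returns, one branch per pgd_free() counterpart. */
	if (PGD_SIZE == PAGE_SIZE)
		return (pgd_t *)get_zeroed_page(GFP_KERNEL);
	else
		return kzalloc(PGD_SIZE, GFP_KERNEL);
}
```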