- 05 October 2015, 1 commit
Committed by Mark Salyzyn

This is the arm64 portion of commit 45cac65b ("readahead: fault retry breaks mmap file read random detection"), which was absent from the initial port and has since gone unnoticed. The original commit says:

> .fault now can retry. The retry can break state machine of .fault. In
> filemap_fault, if page is miss, ra->mmap_miss is increased. In the second
> try, since the page is in page cache now, ra->mmap_miss is decreased. And
> these are done in one fault, so we can't detect random mmap file access.
>
> Add a new flag to indicate .fault is tried once. In the second try, skip
> ra->mmap_miss decreasing. The filemap_fault state machine is ok with it.

With this change, Mark reports that:

> Random read improves by 250%, sequential read improves by 40%, and
> random write by 400% to an eMMC device with dm crypto wrapped around it.

Cc: Shaohua Li <shli@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Mark Salyzyn <salyzyn@android.com>
Signed-off-by: Riley Andrews <riandrews@android.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
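As a hedged illustration (not the verbatim upstream diff), the arm64 retry path in do_page_fault() ends up looking roughly like the sketch below, with FAULT_FLAG_TRIED telling filemap_fault() not to undo its ra->mmap_miss accounting on the second pass:

```c
/*
 * Sketch of the retry loop in arm64's do_page_fault()
 * (arch/arm64/mm/fault.c), simplified from the upstream pattern.
 */
retry:
	fault = handle_mm_fault(mm, vma, addr & PAGE_MASK, mm_flags);

	if ((fault & VM_FAULT_RETRY) && (mm_flags & FAULT_FLAG_ALLOW_RETRY)) {
		/*
		 * Only one retry is allowed; mark the fault as tried so
		 * filemap_fault() skips its ra->mmap_miss decrement when
		 * it finds the page already in the page cache.
		 */
		mm_flags &= ~FAULT_FLAG_ALLOW_RETRY;
		mm_flags |= FAULT_FLAG_TRIED;
		goto retry;
	}
```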
- 14 September 2015, 1 commit
Committed by Jisheng Zhang

If CMA is turned on but the CMA size is set to zero, the kernel should behave as if CMA was not enabled at compile time. Every DMA allocation should therefore check for the existence of the CMA area before requesting memory. arm did this in commit e464ef16 ("arm: dma-mapping: add checking cma area initialized"); do the same for arm64.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
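In essence the allocation path grows a guard like the following sketch. The helper name alloc_coherent_pages is made up for illustration; dev_get_cma_area() is the helper the corresponding arm commit uses:

```c
/*
 * Sketch: fall back to the normal page allocator when no CMA area
 * was set up (e.g. cma=0 on the command line). Simplified.
 */
static struct page *alloc_coherent_pages(struct device *dev, size_t size,
					 gfp_t flags)
{
	/* CMA compiled in but never initialised: behave as if disabled. */
	if (!dev_get_cma_area(dev))
		return alloc_pages(flags, get_order(size));

	return dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
					 get_order(size));
}
```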
- 20 August 2015, 1 commit
Committed by Will Deacon

We don't want to expose the DCC to userspace, particularly as there is a kernel console driver for it. This patch resets mdscr_el1 to disable userspace access to the DCC registers on the cold boot path.

Signed-off-by: Will Deacon <will.deacon@arm.com>
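Roughly, the cold-boot CPU setup writes MDSCR_EL1 with only the TDCC trap bit set. The real change is in arm64 assembly; this is a C-flavoured sketch, and the bit position is recalled from the architecture manual rather than taken from the patch, so treat it as an assumption:

```c
#define MDSCR_EL1_TDCC	(1UL << 12)	/* assumed: trap EL0 accesses to the DCC */

/*
 * Sketch: reset MDSCR_EL1 so EL0 reads/writes of the DCC registers
 * (e.g. DBGDTR_EL0) trap to the kernel; done once on the boot path.
 */
static inline void reset_mdscr_el1(void)
{
	asm volatile(
	"	msr	mdscr_el1, %0\n"
	"	isb\n"
	:: "r" (MDSCR_EL1_TDCC));
}
```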
- 05 August 2015, 1 commit
Committed by Will Deacon

The arm64 booting document requires that the bootloader has cleaned the kernel image to the PoC. However, when a CPU re-enters the kernel due to either a CPU hotplug "on" event or resuming from a low-power state (e.g. cpuidle), the kernel text may in fact be dirty at the PoU due to things like alternative patching or even module loading.

Thanks to I-cache speculation with the MMU off, stale instructions could be fetched prior to enabling the MMU, potentially leading to crashes when executing regions of code that have been modified at runtime.

This patch addresses the issue by ensuring that the local I-cache is invalidated immediately after a CPU has enabled its MMU, but before jumping out of the identity mapping. Any stale instructions fetched from the PoC will then be discarded and refetched correctly from the PoU. Patching kernel text executed prior to the MMU being enabled is prohibited, so the early entry code will always be clean.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
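The maintenance itself boils down to three instructions executed right after the MMU comes on. The real change lives in head.S; this C-wrapped sketch shows the sequence:

```c
/*
 * Sketch of the post-MMU-enable maintenance: invalidate the local
 * I-cache so nothing speculated from the PoC survives, then
 * synchronise before fetching any further instructions.
 */
static inline void invalidate_local_icache(void)
{
	asm volatile(
	"	ic	iallu\n"	/* invalidate local I-cache to the PoU */
	"	dsb	nsh\n"		/* wait for the invalidation to complete */
	"	isb\n"			/* resynchronise the fetch stream */
	::: "memory");
}
```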
- 03 August 2015, 1 commit
Committed by Robin Murphy

Since __get_dma_pgprot() does The Right Thing(TM) in the non-coherent case, and the non-cacheable alias for DMA buffers is private to the kernel anyway, we can simplify things slightly and make the code more readable by just using PAGE_KERNEL as the base pgprot.

Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
- 28 July 2015, 2 commits
Committed by Mark Rutland

Currently create_mapping is marked with __ref, apparently because it refers to early_alloc. However, create_mapping has no logic to prevent erroneous use of early_alloc after it has been freed, and is only ever called by __init functions anyway. Thus the __ref marker is misleading and unnecessary. Instead, this patch marks create_mapping as __init, resulting in warnings if it is used from a non-__init function, and allowing its memory to be reclaimed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Committed by Wang Long

free_initrd_mem() is not needed after booting, so this patch moves it to the __init section. It also makes keep_initrd __initdata, to reduce kernel size.

Signed-off-by: Wang Long <long.wanglong@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
- 27 July 2015, 9 commits
Committed by Dave P Martin

Currently, the minimal default BUG() implementation from asm-generic is used for arm64. This patch uses the BRK software breakpoint instruction to generate a trap instead, similarly to most other arches, with the generic BUG code generating the dmesg boilerplate. This allows bug metadata to be moved to a separate table and reduces the amount of inline code at BUG and WARN sites. It also avoids clobbering any registers before they can be dumped.

To mitigate the size of the bug table further, this patch makes use of the existing infrastructure for encoding addresses within the bug table as 32-bit offsets instead of absolute pointers. (Note that this limits the kernel size to 2GB.)

Traps are registered at arch_initcall time for aarch64, but BUG has minimal real dependencies and it is desirable to be able to generate bug splats as early as possible. This patch redirects all debug exceptions caused by BRK directly to bug_handler() until the full debug exception support has been initialised.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
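A hedged sketch of the shape of a BRK-based BUG() with a 32-bit-offset bug table entry; the real arm64 version in asm/bug.h also records file/line metadata and the BUG/WARN flags, and this simplified macro is named MY_BUG() to make clear it is not the upstream definition:

```c
#define BUG_BRK_IMM	0x800		/* BRK immediate reserved for BUG */

#define MY_BUG() do {							\
	asm volatile(							\
	"1:	brk	%[imm]\n"					\
	"	.pushsection	__bug_table, \"a\"\n"			\
	"	.word	1b - .\n"	/* 32-bit offset, not a pointer */\
	"	.popsection\n"						\
	:: [imm] "i" (BUG_BRK_IMM));					\
	unreachable();			/* from <linux/compiler.h> */	\
} while (0)
```

On the trap side, the registered BRK handler looks the PC up in __bug_table (reversing the `1b - .` encoding) and hands the result to the generic report_bug() machinery.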
Committed by James Morse

'Privileged Access Never' is a new ARMv8.1 feature which prevents privileged code from accessing any virtual address where read or write access is also permitted at EL0. This patch enables the PAN feature on all CPUs, and modifies the {get,put}_user helpers to temporarily permit access. This will catch kernel bugs where user memory is accessed directly. 'Unprivileged loads and stores' using ldtrb et al are unaffected by PAN.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
[will: use ALTERNATIVE in asm and tidy up pan_enable check]
Signed-off-by: Will Deacon <will.deacon@arm.com>
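Conceptually, the uaccess helpers are bracketed as in the sketch below. Note this is an assumption-laden simplification: the upstream patch emits the PSTATE.PAN writes via the ALTERNATIVE() framework (hand-encoded instructions), so they only execute on CPUs that actually implement PAN, and the `msr pan, #n` syntax needs an ARMv8.1-aware assembler:

```c
/* Sketch: open a window in PAN around a deliberate user access. */
static inline void uaccess_enable_pan(void)
{
	asm volatile("msr pan, #0" ::: "memory");	/* allow user access */
}

static inline void uaccess_disable_pan(void)
{
	asm volatile("msr pan, #1" ::: "memory");	/* forbid user access */
}

/* Usage pattern inside a get_user()-style helper: */
/*	uaccess_enable_pan();  ...ldr from user pointer...  uaccess_disable_pan(); */
```

Any direct dereference of a user pointer outside such a window now faults, which is exactly the class of bug the feature is meant to catch.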
Committed by Daniel Thompson

Convert the dynamic patching for ARM64_WORKAROUND_CLEAN_CACHE over to the newly added alternative assembler macros.

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Committed by Jisheng Zhang

Remove the paragraph about writing to the Free Software Foundation's mailing address from the GPL notice.

Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Committed by Robin Murphy

The default dma_common_get_sgtable() implementation relies on the CPU address of the buffer being a regular lowmem address. This is not always the case on arm64, since allocations from the various DMA pools may have remapped vmalloc addresses, rendering the use of virt_to_page() invalid. Fix this by providing our own implementation, based on the fact that we can safely derive a physical address from the DMA address in both cases.

Cc: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
[will: made static]
Signed-off-by: Will Deacon <will.deacon@arm.com>
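The replacement is essentially the following sketch (condensed; upstream makes it static per the bracketed note): the page is derived from the DMA handle via dma_to_phys(), never from the possibly-vmalloc'd CPU address:

```c
/*
 * Sketch of an arch get_sgtable that derives the page from the DMA
 * address rather than the (possibly remapped) CPU address.
 */
static int __swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt,
				 void *cpu_addr, dma_addr_t handle,
				 size_t size)
{
	int ret = sg_alloc_table(sgt, 1, GFP_KERNEL);

	if (!ret)
		sg_set_page(sgt->sgl, phys_to_page(dma_to_phys(dev, handle)),
			    PAGE_ALIGN(size), 0);

	return ret;
}
```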
Committed by Will Deacon

Nobody seems to be producing !SMP systems anymore, so this is just becoming a source of kernel bugs, particularly if people want to use coherent DMA with non-shared pages. This patch forces CONFIG_SMP=y for arm64, removing a modest amount of code in the process.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Committed by Catalin Marinas

The ARMv8.1 architecture extensions introduce support for hardware updates of the access and dirty information in page table entries. With TCR_EL1.HA enabled, when the CPU accesses an address with the PTE_AF bit cleared in the page table, instead of raising an access flag fault the CPU sets the bit in the actual page table entry. To ensure that kernel modifications to the page tables do not inadvertently revert a change introduced by hardware updates, the exclusive monitor (ldxr/stxr) is adopted in the pte accessors.

When TCR_EL1.HD is enabled, a write access to a memory location with the DBM (Dirty Bit Management) bit set in the corresponding pte automatically clears the read-only bit (AP[2]). This DBM bit maps onto the Linux PTE_WRITE bit, and to check whether a writable (DBM set) page is dirty, the kernel tests the PTE_RDONLY bit. In order to allow read-only and dirty pages, the kernel needs to preserve the software dirty bit. The hardware dirty status is transferred to the software dirty bit in ptep_set_wrprotect() (using a load/store exclusive loop) and pte_modify().

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
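A C-level sketch of the wrprotect path described above. The upstream accessor is hand-written ldxr/stxr assembly; the cmpxchg() loop below stands in for it, and the exact bit manipulation is simplified:

```c
/*
 * Sketch: transfer the hardware dirty status (PTE_RDONLY clear while
 * DBM/PTE_WRITE is set) into the software dirty bit before setting
 * the pte read-only, atomically w.r.t. concurrent hardware updates.
 */
static inline void ptep_set_wrprotect_sketch(struct mm_struct *mm,
					     unsigned long addr, pte_t *ptep)
{
	pteval_t old, new;

	do {
		old = READ_ONCE(pte_val(*ptep));
		new = old;

		/* writable (DBM set) and not read-only => hardware-dirty */
		if ((new & PTE_WRITE) && !(new & PTE_RDONLY))
			new |= PTE_DIRTY;	/* preserve as software dirty */

		new &= ~PTE_WRITE;		/* drop DBM/write permission */
		new |= PTE_RDONLY;		/* write-protect the page */
	} while (cmpxchg(&pte_val(*ptep), old, new) != old);
}
```

Without the atomic loop, a hardware AP[2] clear racing between the read and the write-back would be silently lost, and a dirty page could be reported clean.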
Committed by Mark Salter

Commit 68234df4 ("arm64: kill flush_cache_all()") removed soft_reset() from the kernel. This was the only caller of setup_mm_for_reboot(), so remove that as well.

Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Committed by Robin Murphy

Since commit 9d3bfbb4 ("arm64: Combine coherent and non-coherent swiotlb dma_ops"), __dma_common_mmap is no longer shared between two callers, so roll it into the remaining one.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
- 07 July 2015, 1 commit
Committed by Ard Biesheuvel

Patch 63a4aea5 ("of: clean-up unnecessary libfdt include paths") removed all explicit libfdt include paths, since those are no longer necessary after the latest dtc upgrade. However, this one snuck in during the same merge window. Remove it.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- 04 July 2015, 1 commit
Committed by Suzuki K. Poulose

Commit 86dca36e introduced ratelimited usage for 'unhandled_signal' messages. That commit checks the ratelimit irrespective of whether the signal is handled or not, which is wrong and leads to false reports in dmesg like:

  __do_user_fault: 127 callbacks suppressed

Do the ratelimit check only if the signal is unhandled.

Fixes: 86dca36e ("arm64: use private ratelimit state along with show_unhandled_signals")
Cc: Vladimir Murzin <Vladimir.Murzin@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
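The fix amounts to reordering the condition so the private ratelimit budget is only consumed for genuinely unhandled signals (sketch, condensed from the description above):

```c
/* Before: the ratelimit counts down even when the signal is handled,
 * producing bogus "callbacks suppressed" noise. */
if (show_unhandled_signals_ratelimited() && unhandled_signal(tsk, sig))
	pr_info("unhandled signal ...");

/* After: test unhandled_signal() first, then consult the ratelimit. */
if (unhandled_signal(tsk, sig) && show_unhandled_signals_ratelimited())
	pr_info("unhandled signal ...");
```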
- 01 July 2015, 2 commits
Committed by Christoffer Dall

The current pmd_huge() and pud_huge() functions simply check whether the table bit is clear and report the entry as huge in that case. This is counter-intuitive, as a clear pmd/pud cannot also be a huge pmd/pud, and it is inconsistent with at least arm and x86.

To prevent others from making the same mistake as me when looking at code that calls these functions, and to fix an issue with KVM on arm64 that causes memory corruption due to incorrect page reference counting resulting from this mistake, let's change the behavior.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Fixes: 084bd298 ("ARM64: mm: HugeTLB support.")
Cc: <stable@vger.kernel.org> # 3.11+
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
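The corrected predicates require a non-empty entry in addition to a clear table bit, along the lines of:

```c
/* A huge entry must be present (non-zero) *and* not a table pointer;
 * an empty pmd/pud is no longer misreported as huge. */
int pmd_huge(pmd_t pmd)
{
	return pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT);
}

int pud_huge(pud_t pud)
{
	return pud_val(pud) && !(pud_val(pud) & PUD_TABLE_BIT);
}
```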
Committed by Ard Biesheuvel

This fixes a build failure under STRICT_MM_TYPECHECKS by adding a missing pgprot_val() around a pgprot_t reference.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- 25 June 2015, 1 commit
Committed by Zhang Zhen

Currently we have many duplicate definitions of huge_pmd_unshare. On all architectures this function just returns 0 when CONFIG_ARCH_WANT_HUGE_PMD_SHARE is N. This patch puts the default implementation in mm/hugetlb.c and lets these architectures use the common code.

Signed-off-by: Zhang Zhen <zhenzhang.zhang@huawei.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: David Rientjes <rientjes@google.com>
Cc: James Yang <James.Yang@freescale.com>
Cc: Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
- 19 June 2015, 2 commits
Committed by Vladimir Murzin

printk_ratelimit() shares its ratelimiting state with other callers, which may lead to scenarios where, by the time we want to print debug information, we have already been ratelimited and nothing appears in dmesg; this makes exception-trace quite a poor helper for debugging. Additionally, there is an imbalance: some messages are limited with the global ratelimit state, while others are limited with private state defined via pr_*_ratelimited(). To address this inconsistency, the show_unhandled_signals_ratelimited() macro is introduced and caller sites are converted to use it.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
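A sketch of the introduced macro, modelled on the upstream helper (details may differ): a ratelimit state private to exception-trace, gated on show_unhandled_signals, so unrelated printk_ratelimit() users can no longer starve this output:

```c
#define show_unhandled_signals_ratelimited()				\
({									\
	/* private state: other ratelimited printks don't affect us */	\
	static DEFINE_RATELIMIT_STATE(_rs,				\
				      DEFAULT_RATELIMIT_INTERVAL,	\
				      DEFAULT_RATELIMIT_BURST);		\
	bool __show = false;						\
	if (show_unhandled_signals && __ratelimit(&_rs))		\
		__show = true;						\
	__show;								\
})
```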
Committed by Vladimir Murzin

Report unhandled SP/PC alignment faults if the show_unhandled_signals variable is set (via /proc/sys/debug/exception-trace).

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- 17 June 2015, 1 commit
Committed by Dave P Martin

The memmap freeing code in free_unused_memmap() computes the end of each memblock by adding the memblock size onto its base. However, if SPARSEMEM is enabled then the value (start) used for the base may already have been rounded downwards to work out which memmap entries to free after the previous memblock. This may cause memmap entries that are in use to get freed.

In general, you're not likely to hit this problem unless there are at least two memblocks and one of them is not aligned to a sparsemem section boundary. Note that carve-outs can increase the number of memblocks by splitting the regions listed in the device tree.

This problem doesn't occur with SPARSEMEM_VMEMMAP, because the vmemmap code deals with freeing the unused regions of the memmap instead of requiring the arch code to do it.

This patch gets the memblock base out of the memblock directly when computing the block end address, to ensure the correct value is used.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
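A sketch of the fixed loop, simplified from the shape of arch/arm64/mm/init.c: the gap check may use a rounded-down start, but the block end is taken from the memblock region itself:

```c
/* Sketch of free_unused_memmap() after the fix; helper names follow
 * the kernel's, the structure is condensed. */
for_each_memblock(memory, reg) {
	start = __phys_to_pfn(reg->base);

#ifdef CONFIG_SPARSEMEM
	/* Round down so the gap before this block stays within the
	 * sparsemem sections that actually exist. */
	start = min(start, ALIGN(prev_end, PAGES_PER_SECTION));
#endif
	/* Free the memmap covering the hole between the previous block
	 * and this one. */
	if (prev_end && prev_end < start)
		free_memmap(prev_end, start);

	/* The fix: derive the block end from the memblock region, not
	 * from the (possibly rounded-down) 'start' above. */
	prev_end = ALIGN(memblock_region_memory_end_pfn(reg),
			 MAX_ORDER_NR_PAGES);
}
```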
- 15 June 2015, 1 commit
Committed by Suthikulpanit, Suravee

ACPI section 6.2.17 (_CCA) states that ARM platforms require the ACPI _CCA object to be specified for DMA-capable devices. Therefore, this patch specifies ACPI_CCA_REQUIRED in the arm64 Kconfig. In addition, to handle the case when _CCA is missing, arm64 assigns dummy_dma_ops to disable the device's DMA capability.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
- 12 June 2015, 1 commit
Committed by Catalin Marinas

After secondary CPU boot or hotplug, the active_mm of the idle thread is &init_mm. The init_mm.pgd (swapper_pg_dir) is only meant for TTBR1_EL1 and must not be set in TTBR0_EL1. Since, when active_mm == &init_mm, TTBR0_EL1 is already set to the reserved value, there is no need to perform any context reset.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org>
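The essence of the fix in switch_mm(), sketched:

```c
/* Sketch: when switching to init_mm, TTBR0_EL1 already holds the
 * reserved (zero page) value, so skip the context switch rather than
 * ever loading swapper_pg_dir into TTBR0. */
static inline void switch_mm_sketch(struct mm_struct *prev,
				    struct mm_struct *next,
				    struct task_struct *tsk)
{
	if (prev == next)
		return;

	if (next == &init_mm) {
		cpu_set_reserved_ttbr0();	/* never swapper_pg_dir in TTBR0 */
		return;
	}

	check_and_switch_context(next, tsk);
}
```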
- 05 June 2015, 1 commit
Committed by Marc Zyngier

asm/alternative-asm.h and asm/alternative.h are extremely similar, and really deserve to live in the same file (as this makes further modifications a bit easier). Fold the content of alternative-asm.h into alternative.h, and update the few users.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- 02 June 2015, 2 commits
Committed by Ard Biesheuvel

Currently, the FDT blob needs to be in the same 512 MB region as the kernel, so that it can be mapped into the kernel virtual memory space very early on using a minimal set of statically allocated translation tables. Now that we have early fixmap support, we can relax this restriction by moving the permanent FDT mapping to the fixmap region instead. This way, the FDT blob may be anywhere in memory.

This also moves the vetting of the FDT to mmu.c, since the early init code in head.S no longer handles mapping of the FDT. At the same time, fix up some comments in head.S that have gone stale.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Committed by Ard Biesheuvel

This splits off the reservation of the memory occupied by the FDT binary itself from the processing of the memory reservations it contains. This is necessary because the physical address of the FDT, which is needed to perform the reservation, may not be known to the FDT driver core; i.e., it may be mapped outside the linear direct mapping, in which case __pa() returns a bogus value.

Cc: Russell King <linux@arm.linux.org.uk>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Acked-by: Rob Herring <robh@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- 19 May 2015, 2 commits
Committed by Mark Rutland

The documented semantics of flush_cache_all are not possible to provide for arm64 (short of flushing the entire physical address space by VA), and there are currently no users; KVM uses VA maintenance exclusively, cpu_reset is never called, and the only two users outside of arch code cannot be built for arm64.

While cpu_soft_reset and related functions (which call flush_cache_all) were thought to be useful for kexec, their current implementations only serve to mask bugs. For correctness, kexec will need to perform maintenance by VA anyway to account for system caches, line migration, and other subtleties of the cache architecture. As the extent of this cache maintenance will be kexec-specific, it should probably live in the kexec code.

This patch removes flush_cache_all, and related unused components, preventing further abuse.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Geoff Levand <geoff@infradead.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Committed by David Hildenbrand

Introduce faulthandler_disabled() and use it to check for irq context and disabled pagefaults (via pagefault_disable()) in the pagefault handlers.

Please note that we keep the in_atomic() checks in place, to detect whether we are in irq context (in which case preemption is always properly disabled). In contrast, preempt_disable() should never be used to disable pagefaults. With !CONFIG_PREEMPT_COUNT, preempt_disable() doesn't modify the preempt counter, and therefore the result of in_atomic() differs. We validate that condition via might_fault() checks when calling might_sleep(). Therefore, add a comment to faulthandler_disabled() describing why this is needed.

faulthandler_disabled() and pagefault_disable() are defined in linux/uaccess.h, so let's properly add that include to all relevant files.

This patch is based on a patch from Thomas Gleixner.

Reviewed-and-tested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: David.Laight@ACULAB.COM
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: airlied@linux.ie
Cc: akpm@linux-foundation.org
Cc: benh@kernel.crashing.org
Cc: bigeasy@linutronix.de
Cc: borntraeger@de.ibm.com
Cc: daniel.vetter@intel.com
Cc: heiko.carstens@de.ibm.com
Cc: herbert@gondor.apana.org.au
Cc: hocko@suse.cz
Cc: hughd@google.com
Cc: mst@redhat.com
Cc: paulus@samba.org
Cc: ralf@linux-mips.org
Cc: schwidefsky@de.ibm.com
Cc: yang.shi@windriver.com
Link: http://lkml.kernel.org/r/1431359540-32227-7-git-send-email-dahi@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
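The new predicate reduces to a one-liner, per the description above (sketched here in macro form):

```c
/*
 * True whenever pagefaults were explicitly disabled via
 * pagefault_disable(), or we run in atomic/irq context where the
 * fault handler must not sleep and should bail out early.
 */
#define faulthandler_disabled() (pagefault_disabled() || in_atomic())
```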
- 05 May 2015, 1 commit
Committed by Jungseung Lee

This fixes the build error below:

  arch/arm64/mm/dump.c: In function ‘ptdump_init’:
  arch/arm64/mm/dump.c:331:18: error: ‘VMEMMAP_START_NR’ undeclared (first use in this function)
    address_markers[VMEMMAP_START_NR].start_address =
  arch/arm64/mm/dump.c:331:18: note: each undeclared identifier is reported only once for each function it appears in
  arch/arm64/mm/dump.c:333:18: error: ‘VMEMMAP_END_NR’ undeclared (first use in this function)
    address_markers[VMEMMAP_END_NR].start_address =

Acked-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Jungseung Lee <js07.lee@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
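The fix is to compile the marker assignments under the same configuration guard that defines the enum values; sketched (exact addresses illustrative):

```c
#ifdef CONFIG_SPARSEMEM_VMEMMAP
	/* VMEMMAP_START_NR / VMEMMAP_END_NR only exist under this guard,
	 * so only touch them when vmemmap is configured in. */
	address_markers[VMEMMAP_START_NR].start_address =
				(unsigned long)virt_to_page(PAGE_OFFSET);
	address_markers[VMEMMAP_END_NR].start_address =
				(unsigned long)virt_to_page(high_memory);
#endif
```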
- 30 April 2015, 1 commit
Committed by Dean Nelson

__dma_alloc() does a PAGE_ALIGN() on the passed-in size argument before doing anything else; __dma_free() does not. Because it doesn't, it is possible to leak memory should size not be an integer multiple of PAGE_SIZE. The solution is to add a PAGE_ALIGN() to __dma_free(), as is done in __dma_alloc().

Additionally, this patch removes a redundant PAGE_ALIGN() from __dma_alloc_coherent(), since __dma_alloc_coherent() can only be called from __dma_alloc(), which already does a PAGE_ALIGN() before the call.

Cc: stable@vger.kernel.org
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Dean Nelson <dnelson@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
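A sketch of the free path with the fix applied, loosely following the arm64 dma-mapping code of this era (the surrounding structure may differ):

```c
static void __dma_free(struct device *dev, size_t size, void *vaddr,
		       dma_addr_t dma_handle, struct dma_attrs *attrs)
{
	void *swiotlb_addr = phys_to_virt(dma_to_phys(dev, dma_handle));

	size = PAGE_ALIGN(size);	/* the fix: mirror __dma_alloc() */

	if (!is_device_dma_coherent(dev)) {
		if (__free_from_pool(vaddr, size))
			return;
		vunmap(vaddr);		/* drop the non-cacheable alias */
	}
	__dma_free_coherent(dev, size, swiotlb_addr, dma_handle, attrs);
}
```

Without the alignment, a sub-page remainder allocated by __dma_alloc() would never be returned to the allocator.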
- 27 April 2015, 1 commit
Committed by Marek Szyprowski

Buffers allocated by dma_alloc_coherent() are always zeroed on Alpha, ARM (32-bit), MIPS, PowerPC, x86/x86_64 and probably other architectures. It turns out that some drivers rely on this 'feature'. An allocated buffer might also be exposed to userspace with a dma_mmap() call, so clearing it is desirable from a security point of view, to avoid exposing random memory to userspace. This patch unifies dma_alloc_coherent() behaviour on the ARM64 architecture with the other implementations by unconditionally zeroing the allocated buffer.

Cc: <stable@vger.kernel.org> # v3.14+
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
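In the allocator this reduces to an unconditional clear after a successful allocation; a sketch (variables dev, size, flags and the surrounding function are assumed context):

```c
page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
				 get_order(size));
if (!page)
	return NULL;

addr = page_address(page);
memset(addr, 0, size);	/* always zero: match the other architectures */
```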
- 15 April 2015, 4 commits
Committed by Vladimir Murzin

Add support for the memtest command line option.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Committed by Kees Cook

When an architecture fully supports randomizing the ELF load location, a per-arch mmap_rnd() function is used to find a randomized mmap base. In preparation for randomizing the location of ET_DYN binaries separately from mmap, this renames and exports these functions as arch_mmap_rnd(). It additionally introduces CONFIG_ARCH_HAS_ELF_RANDOMIZE for describing this feature on architectures that support it (which is a superset of ARCH_BINFMT_ELF_RANDOMIZE_PIE, since s390 already supports a separated ET_DYN ASLR from mmap ASLR without the ARCH_BINFMT_ELF_RANDOMIZE_PIE logic).

Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Hector Marco-Gisbert <hecmargi@upv.es>
Cc: Russell King <linux@arm.linux.org.uk>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: "David A. Long" <dave.long@linaro.org>
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Cc: Arun Chandran <achandran@mvista.com>
Cc: Yann Droneaud <ydroneaud@opteya.com>
Cc: Min-Hua Chen <orca.chen@gmail.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Alex Smith <alex@alex-smith.me.uk>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Vineeth Vijayan <vvijayan@mvista.com>
Cc: Jeff Bailey <jeffbailey@google.com>
Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Cc: Ben Hutchings <ben@decadent.org.uk>
Cc: Behan Webster <behanw@converseincode.com>
Cc: Ismael Ripoll <iripoll@upv.es>
Cc: Jan-Simon Möller <dl9pf@gmx.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
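On arm64 the renamed helper is small; a sketch modelled on the patch description, with STACK_RND_MASK bounding the entropy as the old mmap_rnd() did:

```c
/* Sketch of the exported per-arch entropy source (arch/arm64/mm/mmap.c). */
unsigned long arch_mmap_rnd(void)
{
	unsigned long rnd;

	rnd = (unsigned long)get_random_int() & STACK_RND_MASK;

	return rnd << PAGE_SHIFT;	/* page-aligned random offset */
}
```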
Committed by Kees Cook

In preparation for splitting out ET_DYN ASLR, this refactors the use of mmap_rnd() to be used similarly to arm and x86. This additionally enables mmap ASLR on legacy mmap layouts, which appeared to be missing on arm64 even though it was already supported on arm. It also removes a copy/pasted declaration of an unused function.

Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Committed by Kirill A. Shutemov

We want to use the number of page table levels to define mm_struct. Let's expose it as CONFIG_PGTABLE_LEVELS. ARM64_PGTABLE_LEVELS is renamed to PGTABLE_LEVELS and defined before sourcing init/Kconfig: arch/Kconfig will define the default value, and it's sourced from init/Kconfig.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
- 23 March 2015, 1 commit
Committed by Ard Biesheuvel

The page size and the number of translation levels, and hence the supported virtual address range, are build-time configurables on arm64 whose optimal values are use-case dependent. However, in the current implementation, if the system's RAM is located at a very high offset, the virtual address range needs to reflect that merely because the identity mapping, which is only used to enable or disable the MMU, requires the extended virtual range to map the physical memory at an equal virtual offset.

This patch relaxes that requirement by increasing the number of translation levels for the identity mapping only, and only when actually needed, i.e., when the system RAM's offset is found to be out of reach at runtime.

Tested-by: Laura Abbott <lauraa@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
- 21 March 2015, 1 commit
Committed by Suzuki K. Poulose

The current implementation doesn't zero out the pages it allocates. Honor the __GFP_ZERO flag and zero out the buffer if the flag is set.

Cc: <stable@vger.kernel.org> # v3.14+
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
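The fix reduces to honoring the flag after a successful CMA allocation (the regular page allocator already handles __GFP_ZERO itself); a sketch of the shape of the change in __dma_alloc_coherent():

```c
if (IS_ENABLED(CONFIG_DMA_CMA)) {
	struct page *page;
	void *addr;

	page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
					 get_order(size));
	if (!page)
		return NULL;

	*dma_handle = phys_to_dma(dev, page_to_phys(page));
	addr = page_address(page);
	if (flags & __GFP_ZERO)	/* the fix: CMA pages are not pre-zeroed */
		memset(addr, 0, size);
	return addr;
}
```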