- 18 Nov 2015, 1 commit

By Laura Abbott
The permissions in mark_rodata_ro trigger a build error with STRICT_MM_TYPECHECKS. Fix this by introducing PAGE_KERNEL_ROX for the same reasons as PAGE_KERNEL_RO. From Ard: "PAGE_KERNEL_EXEC has PTE_WRITE set as well, making the range writeable under the ARMv8.1 DBM feature, that manages the dirty bit in hardware (writing to a page with the PTE_RDONLY and PTE_WRITE bits both set will clear the PTE_RDONLY bit in that case)"

Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
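For orientation, a sketch (not the exact hunk) of how a PAGE_KERNEL_ROX constant sits next to PAGE_KERNEL_RO in arch/arm64/include/asm/pgtable.h: read-only is expressed with PTE_RDONLY and without PTE_WRITE, and execute permission simply by leaving PTE_PXN clear:

  /* sketch only; the exact bit composition in the merged patch may differ */
  #define PAGE_KERNEL_RO   __pgprot(_PAGE_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_RDONLY)
  #define PAGE_KERNEL_ROX  __pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_RDONLY)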

- 17 Nov 2015, 1 commit

By Ard Biesheuvel
When booting a 64k pages kernel that is built with CONFIG_DEBUG_RODATA and resides at an offset that is not a multiple of 512 MB, the rounding that occurs in __map_memblock() and fixup_executable() results in incorrect regions being mapped. The following snippet from /sys/kernel/debug/kernel_page_tables shows how, when the kernel is loaded 2 MB above the base of DRAM at 0x40000000, the first 2 MB of memory (which may be inaccessible from non-secure EL1 or just reserved by the firmware) is inadvertently mapped into the end of the module region.

  ---[ Modules start ]---
  0xfffffdffffe00000-0xfffffe0000000000           2M RW NX ... UXN MEM/NORMAL
  ---[ Modules end ]---
  ---[ Kernel Mapping ]---
  0xfffffe0000000000-0xfffffe0000090000         576K RW NX ... UXN MEM/NORMAL
  0xfffffe0000090000-0xfffffe0000200000        1472K ro x  ... UXN MEM/NORMAL
  0xfffffe0000200000-0xfffffe0000800000           6M ro x  ... UXN MEM/NORMAL
  0xfffffe0000800000-0xfffffe0000810000          64K ro x  ... UXN MEM/NORMAL
  0xfffffe0000810000-0xfffffe0000a00000        1984K RW NX ... UXN MEM/NORMAL
  0xfffffe0000a00000-0xfffffe00ffe00000        4084M RW NX ... UXN MEM/NORMAL

The same issue is likely to occur on 16k pages kernels whose load address is not a multiple of 32 MB (i.e., SECTION_SIZE). So round to SWAPPER_BLOCK_SIZE instead of SECTION_SIZE.

Fixes: da141706 ("arm64: add better page protections to arm64")
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Laura Abbott <labbott@redhat.com>
Cc: <stable@vger.kernel.org> # 4.0+
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
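A minimal sketch of the kind of change involved (assumed shape, not the literal diff): the boundaries of the executable region are rounded to the granule the early mappings actually use, rather than to SECTION_SIZE:

  /* SWAPPER_BLOCK_SIZE is SECTION_SIZE when section maps are used (4k pages)
   * and PAGE_SIZE otherwise, so the rounding matches the real mapping granule. */
  phys_addr_t kernel_x_start = round_down(__pa(_stext), SWAPPER_BLOCK_SIZE);
  phys_addr_t kernel_x_end   = round_up(__pa(__init_end), SWAPPER_BLOCK_SIZE);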

- 12 Nov 2015, 1 commit

By Jisheng Zhang
split_pud and fixup_executable are only called from within mmu.c, so they can be declared static.

Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 09 Nov 2015, 2 commits

By Ard Biesheuvel
The mapping permissions of the FDT are set to 'PAGE_KERNEL | PTE_RDONLY' in an attempt to map the FDT as read-only. However, not only does this break at build time under STRICT_MM_TYPECHECKS (since the two terms are of different types in that case), it also results in both the PTE_WRITE and PTE_RDONLY attributes being set, which means the region is still writable under ARMv8.1 DBM (and an attempted write will simply clear the PTE_RDONLY bit). So define PAGE_KERNEL_RO (which already has an established meaning across architectures) and use that instead.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
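Illustrative only (dt_phys, dt_virt and dt_size are placeholder names, not the verbatim hunk): the FDT mapping call switches from an ad-hoc pgprot expression to the named constant:

  /* before: PAGE_KERNEL is a pgprot_t and PTE_RDONLY a raw bit value, so this
   * does not build under STRICT_MM_TYPECHECKS, and PTE_WRITE is still set */
  create_mapping(dt_phys, dt_virt, dt_size, PAGE_KERNEL | PTE_RDONLY);

  /* after: read-only by construction */
  create_mapping(dt_phys, dt_virt, dt_size, PAGE_KERNEL_RO);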

By Ard Biesheuvel
The new page table code that manipulates the PTE_CONT flags does so in a way that is inconsistent with STRICT_MM_TYPECHECKS. Fix it by using the correct combination of __pgprot() and pgprot_val().

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 20 Oct 2015, 1 commit

By Suzuki K. Poulose
We use section maps with 4K page size to create the swapper/idmaps. So far we have used !64K or 4K checks to handle the case where we use the section maps. This patch adds a new symbol, ARM64_SWAPPER_USES_SECTION_MAPS, to handle cases where we use section maps, instead of using the page size symbols.

Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
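The intent, sketched from the description (the merged header may differ in detail): derive one symbol from the page size, and let the rest of the code test that symbol instead:

  #ifdef CONFIG_ARM64_4K_PAGES
  #define ARM64_SWAPPER_USES_SECTION_MAPS 1
  #else
  #define ARM64_SWAPPER_USES_SECTION_MAPS 0
  #endif

  /* callers then express intent rather than page size, e.g.: */
  #if ARM64_SWAPPER_USES_SECTION_MAPS
  #define SWAPPER_BLOCK_SIZE   SECTION_SIZE
  #else
  #define SWAPPER_BLOCK_SIZE   PAGE_SIZE
  #endif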

- 09 Oct 2015, 1 commit

By Jeremy Linton
With 64k pages, the next larger segment size is 512M. The Linux kernel also uses different protection flags to cover its code and data. Because of this requirement, the vast majority of the kernel code and data structures end up being mapped with 64k pages instead of the larger pages common with a 4k page kernel. Recent ARM processors support a contiguous bit in the page tables which allows a TLB to cover a range larger than a single PTE if that range is mapped into physically contiguous RAM. So, for the kernel it is a good idea to set this flag. Some basic micro benchmarks show it can significantly reduce the number of L1 dTLB refills. Add a boot option to enable/disable CONT marking, as well as fix a bug found by Steve Capper.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
[catalin.marinas@arm.com: remove CONFIG_ARM64_CONT_PTE altogether]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
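A simplified sketch of the idea (addr, next, phys and prot are assumed to come from the usual pte-mapping loop): only set PTE_CONT when the virtual address, the physical address and the extent being mapped are all aligned to the contiguous range, and compose the pgprot in a STRICT_MM_TYPECHECKS-friendly way:

  /* CONT_SIZE/CONT_MASK describe the span one contiguous-hint entry covers,
   * e.g. 32 x 64K = 2M with a 64K granule */
  if (((addr | next | phys) & ~CONT_MASK) == 0)
          prot = __pgprot(pgprot_val(prot) | PTE_CONT);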

- 07 Oct 2015, 1 commit

By Will Deacon
There are a number of places where a single CPU is running with a private page-table and we need to perform maintenance on the TLB and I-cache in order to ensure correctness, but do not require the operation to be broadcast to other CPUs. This patch adds local variants of tlb_flush_all and __flush_icache_all to support these use-cases and updates the callers respectively. __local_flush_icache_all also implies an isb, since it is intended to be used synchronously.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Daney <david.daney@cavium.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
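Roughly what the local variants look like (a sketch based on the commit text; the merged asm may differ slightly): non-shareable barriers around non-broadcast maintenance, with the I-cache helper ending in an isb:

  static inline void local_flush_tlb_all(void)
  {
          dsb(nshst);
          asm("tlbi vmalle1");
          dsb(nsh);
          isb();
  }

  static inline void __local_flush_icache_all(void)
  {
          asm("ic iallu");
          dsb(nsh);
          isb();
  }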

- 28 Jul 2015, 1 commit

By Mark Rutland
Currently create_mapping is marked with __ref, apparently because it refers to early_alloc. However, create_mapping has no logic to prevent erroneous use of early_alloc after it has been freed, and is only ever called by __init functions anyway. Thus the __ref marker is misleading and unnecessary. Instead, this patch marks create_mapping as __init, resulting in warnings if it is used from a non-__init function, and allowing its memory to be reclaimed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

- 27 Jul 2015, 1 commit

By Mark Salter
Commit 68234df4 ("arm64: kill flush_cache_all()") removed soft_reset() from the kernel. This was the only caller of setup_mm_for_reboot(), so remove that also.

Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

- 01 Jul 2015, 1 commit

By Ard Biesheuvel
This fixes a build failure under STRICT_MM_TYPECHECKS, by adding a missing pgprot_val() around a pgprot_t reference.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 02 Jun 2015, 1 commit

By Ard Biesheuvel
Currently, the FDT blob needs to be in the same 512 MB region as the kernel, so that it can be mapped into the kernel virtual memory space very early on using a minimal set of statically allocated translation tables. Now that we have early fixmap support, we can relax this restriction, by moving the permanent FDT mapping to the fixmap region instead. This way, the FDT blob may be anywhere in memory. This also moves the vetting of the FDT to mmu.c, since the early init code in head.S does not handle mapping of the FDT anymore. At the same time, fix up some comments in head.S that have gone stale.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 15 Apr 2015, 1 commit

By Kirill A. Shutemov
We want to use the number of page table levels to define mm_struct. Let's expose it as CONFIG_PGTABLE_LEVELS. ARM64_PGTABLE_LEVELS is renamed to PGTABLE_LEVELS and defined before sourcing init/Kconfig: arch/Kconfig defines the default value, and it is sourced from init/Kconfig.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
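For illustration (a sketch of the arm64 side; the exact defaults depend on the page-size and VA-bits options available at the time), the renamed symbol becomes an ordinary derived Kconfig value:

  config PGTABLE_LEVELS
          int
          default 2 if ARM64_64K_PAGES && ARM64_VA_BITS_42
          default 3 if ARM64_64K_PAGES && ARM64_VA_BITS_48
          default 3 if !ARM64_64K_PAGES && ARM64_VA_BITS_39
          default 4 if !ARM64_64K_PAGES && ARM64_VA_BITS_48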

- 23 Mar 2015, 1 commit

By Ard Biesheuvel
The page size and the number of translation levels, and hence the supported virtual address range, are build-time configurables on arm64 whose optimal values are use case dependent. However, in the current implementation, if the system's RAM is located at a very high offset, the virtual address range needs to reflect that merely because the identity mapping, which is only used to enable or disable the MMU, requires the extended virtual range to map the physical memory at an equal virtual offset. This patch relaxes that requirement, by increasing the number of translation levels for the identity mapping only, and only when actually needed, i.e., when system RAM's offset is found to be out of reach at runtime.

Tested-by: Laura Abbott <lauraa@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

- 19 Mar 2015, 1 commit

By Mark Rutland
Fixmap indices are in the interval (FIX_HOLE, __end_of_fixed_addresses), but in __set_fixmap we only check idx <= __end_of_fixed_addresses, and therefore indices <= FIX_HOLE are erroneously accepted. If called with such an idx, __set_fixmap may corrupt page tables outside of the fixmap region. This patch ensures that we validate the idx against both endpoints of the interval.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
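The resulting check is an open-interval test at the top of __set_fixmap (a sketch of the shape described above):

  /* reject FIX_HOLE and below as well as anything past the end of the table */
  BUG_ON(idx <= FIX_HOLE || idx >= __end_of_fixed_addresses);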

- 30 Jan 2015, 1 commit

By Catalin Marinas
Commit 523d6e9f ("arm64:mm: free the useless initial page table") introduced a BUG_ON checking for the allocation type, but it was referring to the early_alloc() function in the __init section. This patch changes the check to slab_is_available() and also relaxes the BUG to a WARN_ON_ONCE.

Reported-by: Will Deacon <will.deacon@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 28 Jan 2015, 3 commits

By Mark Rutland
The {pgd,pud,pmd}_bad family of macros have slightly fuzzy cross-architecture semantics, and seem to imply a populated entry that is not a next-level table, rather than a particular type of entry (e.g. a section map). In arm64 code, for those cases where we care about whether an entry is a section mapping, we can instead use the {pud,pmd}_sect macros to explicitly check for this case. This helps to document precisely what we care about, making the code easier to read, and allows for future relaxation of the *_bad macros to check for other "bad" entries.

To that end this patch updates the table dumping and initial table setup to check for section mappings with {pud,pmd}_sect, and adds/restores BUG_ON(*_bad(*p)) checks after we've handled the *_sect and *_none cases so as to catch remaining "bad" cases. In the fault handling code, show_pte is left with *_bad checks as it only cares about whether it can walk the next level table, and this path is used for both kernel and userspace fault handling. The former case will be followed by a die() where we'll report the address that triggered the fault, which can be useful context for debugging.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Steve Capper <steve.capper@linaro.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
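The pattern being described looks roughly like this when walking a pmd during table setup (sketch, not a literal hunk):

  pmd_t old_pmd = *pmd;

  if (pmd_none(old_pmd)) {
          /* nothing here yet: populate a new pte table */
  } else if (pmd_sect(old_pmd)) {
          /* an existing section mapping that may need splitting or replacing */
  } else {
          /* anything else had better be a valid next-level table */
          BUG_ON(pmd_bad(old_pmd));
  }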

By Mark Rutland
In paging_init, we call flush_cache_all, but this is backed by Set/Way operations which may not achieve anything in the presence of cache line migration and/or system caches. If the caches are already in an inconsistent state at this point, there is nothing we can do (short of flushing the entire physical address space by VA) to empty architected and system caches. As such, flush_cache_all only serves to mask other potential bugs. Hence, this patch removes the boot-time call to flush_cache_all.

Immediately after the cache maintenance we flush the TLBs, but this is also unnecessary. Before enabling the MMU, the TLBs are invalidated, and thus are initially clean. When changing the contents of active tables (e.g. in fixup_executable() for DEBUG_RODATA) we perform the required TLB maintenance following the update, and therefore no additional maintenance is required to ensure the new table entries are in effect. Since activating the MMU we will not have modified system register fields permitted to be cached in a TLB, and therefore do not need maintenance for any cached system register fields. Hence, the TLB flush is unnecessary.

Shortly after the unnecessary TLB flush, we update TTBR0 to point to an empty zero page rather than the idmap, and flush the TLBs. This maintenance is necessary to remove the global idmap entries from the TLBs (as they would conflict with userspace mappings), and is retained.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Steve Capper <steve.capper@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

By zhichang.yuan
For 64K page systems, after mapping a PMD section, the corresponding initial page table is not needed any more. That page can be freed.

Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>
[catalin.marinas@arm.com: added BUG_ON() to catch late memblock freeing]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 22 Jan 2015, 2 commits

By Ard Biesheuvel
Now that the create_mapping() code in mm/mmu.c is able to support setting up kernel page tables at initcall time, we can move the whole virtmap creation to arm64_enable_runtime_services() instead of having a distinct stage during early boot. This also allows us to drop the arm64-specific EFI_VIRTMAP flag.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel-QSEj5FYQhm4dnm+yROfE0A@public.gmane.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

By Laura Abbott
Add page protections for arm64 similar to those in arm. This is for security reasons to prevent certain classes of exploits. The current method:

- Map all memory as either RWX or RW. We round to the nearest section to avoid creating page tables before everything is mapped
- Once everything is mapped, if either end of the RWX section should not be X, we split the PMD and remap as necessary
- When initmem is to be freed, we change the permissions back to RW (using stop machine if necessary to flush the TLB)
- If CONFIG_DEBUG_RODATA is set, the read only sections are set read only.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Kees Cook <keescook@chromium.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
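A rough sketch of the CONFIG_DEBUG_RODATA step (assumed shape; create_mapping_late and the section symbols follow the arm64 code of that era). Note that the pgprot expression here is the one the 18 Nov 2015 entry above later replaces with PAGE_KERNEL_ROX:

  #ifdef CONFIG_DEBUG_RODATA
  void mark_rodata_ro(void)
  {
          create_mapping_late(__pa(_stext), (unsigned long)_stext,
                              (unsigned long)_etext - (unsigned long)_stext,
                              PAGE_KERNEL_EXEC | PTE_RDONLY);
  }
  #endif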

- 14 Jan 2015, 1 commit

By Mark Rutland
The cachepolicy kernel parameter was intended to aid in the debugging of coherency issues, but it is fundamentally broken for several reasons:

* On SMP platforms, only the boot CPU's tcr_el1 is altered. Secondary CPUs may therefore differ w.r.t. the attributes they apply to MT_NORMAL memory, resulting in a loss of coherency.
* The cache maintenance using flush_dcache_all (based on Set/Way operations) is not guaranteed to empty a given CPU's cache hierarchy while said CPU has caches enabled, it cannot empty the caches of other coherent PEs, nor is it guaranteed to flush data to the PoC even when caches are disabled.
* The TLBs are not invalidated around the modification of MAIR_EL1 and TCR_EL1, as required by the architecture (as both are permitted to be cached in a TLB). This may result in CPUs using attributes other than those expected for some memory accesses, resulting in a loss of coherency.
* Exclusive accesses are not architecturally guaranteed to function as expected on memory marked as Write-Through or Non-Cacheable. Thus changing the attributes of MT_NORMAL away from the (architecturally safe) defaults may cause uses of these instructions (e.g. atomics) to behave erratically.

Given this, the cachepolicy code cannot be used for debugging purposes as it alone is likely to cause coherency issues. This patch removes the broken cachepolicy code.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 13 Jan 2015, 1 commit

By Ard Biesheuvel
Now that we have moved the call to SetVirtualAddressMap() to the stub, UEFI has no use for the ID map, so we can drop the code that installs ID mappings for UEFI memory regions.

Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

- 12 Jan 2015, 2 commits

By Ard Biesheuvel
For UEFI, we need to install the memory mappings used for Runtime Services in a dedicated set of page tables. Add create_pgd_mapping(), which allows us to allocate and install those page table entries early.

Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
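The interface one would sketch from this description (the parameter order is an assumption): a create_mapping() variant that takes an explicit mm rather than implicitly targeting swapper_pg_dir:

  void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
                                 unsigned long virt, phys_addr_t size,
                                 pgprot_t prot);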

By Ard Biesheuvel
Currently, swapper_pg_dir and idmap_pg_dir share the init_mm mm_struct instance. To allow the introduction of other pg_dir instances, for instance, for UEFI's mapping of Runtime Services, make the mm_struct instance an explicit argument that gets passed down to the pmd and pte instantiation functions. Note that the consumers (pmd_populate/pgd_populate) of the mm_struct argument don't actually inspect it, but let's fix it for correctness' sake.

Acked-by: Steve Capper <steve.capper@linaro.org>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

- 25 Nov 2014, 1 commit

By Laura Abbott
The fixmap API was originally added for arm64 for early_ioremap purposes. It can be used for other purposes too so move the initialization from ioremap to somewhere more generic. This makes it obvious where the fixmap is being set up and allows for a cleaner implementation of __set_fixmap.

Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

- 13 Nov 2014, 1 commit

By Min-Hua Chen
Use phys_addr_t for physical address in alloc_init_pud. Although phys_addr_t and unsigned long are 64 bit in arm64, it is better to use phys_addr_t to describe physical addresses.

Signed-off-by: Min-Hua Chen <orca.chen@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 07 Nov 2014, 1 commit

By Min-Hua Chen
Use phys_addr_t for physical address in alloc_init_pud. Although phys_addr_t and unsigned long are 64 bit in arm64, it is better to use phys_addr_t to describe physical addresses.

Signed-off-by: Min-Hua Chen <orca.chen@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

- 25 Oct 2014, 1 commit

By Catalin Marinas
With 48-bit VA space, the 64K page configuration uses 3 levels instead of 2 and PUD_SIZE != PMD_SIZE. Since with 64K pages we only cover PMD_SIZE with the initial swapper_pg_dir populated in head.S, the memblock current_limit needs to be set accordingly in map_mem() to avoid allocating unmapped memory. The memblock current_limit is progressively increased as more blocks are mapped.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 16 Sep 2014, 1 commit

By Mark Charlebois
Remove '#' from immediate parameter in AARCH64 inline assembly in mmu. This code now works with both gcc and clang.

Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Signed-off-by: Behan Webster <behanw@converseincode.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 23 Jul 2014, 1 commit

By Jungseok Lee
This patch implements 4 levels of translation tables, since 3 levels of page tables with 4KB pages cannot support the 40-bit physical address space described in [1], due to the following issue.

It is a restriction that the kernel logical memory map with 4KB + 3 levels (0xffffffc000000000-0xffffffffffffffff) cannot cover the RAM region from 544GB to 1024GB in [1]. Specifically, the ARM64 kernel fails to create a mapping for this region in the map_mem function, since __phys_to_virt for this region overflows the address range. If an SoC design follows the document [1], over 32GB of RAM would be placed from 544GB. Even a 64GB system is supposed to use the region from 544GB to 576GB for only 32GB of RAM. Naturally, this leads to enabling 4 levels of page tables to avoid hacking __virt_to_phys and __phys_to_virt. However, it is recommended that 4 levels of page tables should only be enabled if the memory map is too sparse or there is about 512GB of RAM.

References
----------
[1]: Principles of ARM Memory Maps, White Paper, Issue C

Signed-off-by: Jungseok Lee <jays.lee@samsung.com>
Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Steve Capper <steve.capper@linaro.org>
[catalin.marinas@arm.com: MEMBLOCK_INITIAL_LIMIT removed, same as PUD_SIZE]
[catalin.marinas@arm.com: early_ioremap_init() updated for 4 levels]
[catalin.marinas@arm.com: 48-bit VA depends on BROKEN until KVM is fixed]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>

- 09 May 2014, 2 commits

By Steve Capper
We have the capability to map 1GB level 1 blocks when using a 4K granule. This patch adjusts the create_mapping logic such that, when mapping physical memory on boot, we attempt to use a 1GB block if both the VA and PA start and end are 1GB aligned. This both reduces the levels of lookup required to resolve a kernel logical address and reduces TLB pressure on cores that support 1GB TLB entries.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Tested-by: Jungseok Lee <jays.lee@samsung.com>
[catalin.marinas@arm.com: s/prot_sect_kernel/PROT_SECT_NORMAL_EXEC/]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
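Sketched from the description (the prot constant is taken from the bracketed note above; the surrounding loop variables are assumed): in the pud-level loop, install a block entry when everything lines up on a 1GB boundary:

  /* a level-1 (pud) block covers 1GB and is only possible with a 4K granule */
  if (PAGE_SHIFT == 12 && ((addr | next | phys) & ~PUD_MASK) == 0)
          set_pud(pud, __pud(phys | PROT_SECT_NORMAL_EXEC));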

By Catalin Marinas
The primary aim of this patchset is to remove the pgprot_default and prot_sect_default global variables and rely strictly on predefined values. The original goal was to be able to run SMP kernels on UP hardware by not setting the Shareability bit. However, it is unlikely to see UP ARMv8 hardware and even if we do, the Shareability bit is no longer assumed to disable cacheable accesses. A side effect is that the device mappings now have the Shareability attribute set. The hardware, however, should ignore it since Device accesses are always Outer Shareable. Following the removal of the two global variables, there is some PROT_* macro reshuffling and cleanup, including the __PAGE_* macros (replaced by PAGE_*).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>

- 04 May 2014, 1 commit

By Dave Anderson
Fix for the arm64 kern_addr_valid() function to recognize virtual addresses in the kernel logical memory map. The function fails as written because it does not check whether the addresses in that region are mapped at the pmd level to 2MB or 512MB pages, continues the page table walk to the pte level, and issues a garbage value to pfn_valid(). Tested on 4K-page and 64K-page kernels.

Signed-off-by: Dave Anderson <anderson@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
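The essence of such a fix, sketched with the arm64 pmd accessors (not necessarily the verbatim hunk): stop the walk at a block mapping and feed pfn_valid() the block's pfn instead of descending to a pte level that does not exist:

  if (pmd_none(*pmd))
          return 0;

  /* 2MB (4K granule) or 512MB (64K granule) block: no pte table below */
  if (pmd_sect(*pmd))
          return pfn_valid(pmd_pfn(*pmd));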

- 01 May 2014, 1 commit

By Mark Salter
At boot time, before switching to a virtual UEFI memory map, firmware expects UEFI memory and IO regions to be identity mapped whenever the kernel makes runtime services calls. The existing early boot code creates an identity map of kernel text/data but this is not sufficient for UEFI. This patch adds a create_id_mapping() function which reuses the core code of the existing create_mapping().

Signed-off-by: Mark Salter <msalter@redhat.com>
[ Fixed error message formatting (%pa). ]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>

- 08 Apr 2014, 2 commits

By Mark Salter
Add support for early IO or memory mappings which are needed before the normal ioremap() is usable. This also adds fixmap support for permanent fixed mappings such as that used by the earlyprintk device register region.

Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Borislav Petkov <borislav.petkov@amd.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
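Typical use of the early_ioremap API this enables (illustrative; UART_PHYS_BASE is a made-up address, not part of the patch):

  /* usable before paging_init()/ioremap(), e.g. for an earlyprintk UART */
  void __iomem *regs = early_ioremap(UART_PHYS_BASE, SZ_4K);

  if (regs) {
          /* ... poke the device registers ... */
          early_iounmap(regs, SZ_4K);
  }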

By Mark Salter
Presently, paging_init() calls init_mem_pgprot() to initialize pgprot values used by macros such as PAGE_KERNEL, PAGE_KERNEL_EXEC, etc. The new fixmap and early_ioremap support also needs to use these macros before paging_init() is called. This patch moves the init_mem_pgprot() call out of paging_init() and into setup_arch() so that pgprot_default gets initialized in time for fixmap and early_ioremap.

Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Borislav Petkov <borislav.petkov@amd.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 05 Feb 2014, 1 commit

By Catalin Marinas
With the 64K page size configuration, __create_page_tables in head.S maps enough memory to get started but using 64K pages rather than 512M sections, with a single pgd/pud/pmd entry pointing to a pte table. create_mapping() may override the pgd/pud/pmd table entry with a block (section) one if the RAM size is more than 512MB and aligned correctly. For the end of this block to be accessible, the old TLB entry must be invalidated.

Cc: <stable@vger.kernel.org>
Reported-by: Mark Salter <msalter@redhat.com>
Tested-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
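A sketch of the fix described (prot_sect_kernel is the section protection value mmu.c used at the time; this is not the literal hunk):

  pmd_t old_pmd = *pmd;

  set_pmd(pmd, __pmd(phys | prot_sect_kernel));

  /* __create_page_tables may already have installed a pte-table entry here;
   * flush so no stale copy of it survives alongside the new block mapping */
  if (!pmd_none(old_pmd))
          flush_tlb_all();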

- 28 Aug 2013, 1 commit

By Catalin Marinas
The map_mem() function limits the current memblock limit to PGDIR_SIZE (the initial swapper_pg_dir mapping) to avoid create_mapping() allocating memory from unmapped areas. However, if the first block is within PGDIR_SIZE and not ending on a PMD_SIZE boundary, when the 4K page configuration is enabled, create_mapping() will try to allocate a pte page. Such a page may be returned by memblock_alloc() from the end of such a bank (or any subsequent bank within PGDIR_SIZE) which is not mapped yet. The patch limits the current memblock limit to the aligned end of the first bank and gradually increases it as more memory is mapped. It also ensures that the start of the first bank is aligned to PMD_SIZE to avoid pte page allocation for this mapping.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: "Leizhen (ThunderTown, Euler)" <thunder.leizhen@huawei.com>
Tested-by: "Leizhen (ThunderTown, Euler)" <thunder.leizhen@huawei.com>

- 14 Jun 2013, 1 commit

By Steve Capper
In paging_init the memblock limit is set to restrict any addresses returned by early_alloc to fit within the initial direct kernel mapping in swapper_pg_dir. This allows map_mem to allocate puds, pmds and ptes from the initial direct kernel mapping. The limit stays low after paging_init() though, meaning any bootmem allocations will be from a restricted subset of memory. Gigabyte huge pages, for instance, are normally allocated from bootmem as their order (18) is too large for the default buddy allocator (MAX_ORDER = 11). This patch restores the memblock limit when map_mem has finished, allowing gigabyte huge pages (and other objects) to be allocated from all of bootmem.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
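In terms of the memblock API, the pattern is roughly (sketch, with the limit values simplified):

  static void __init map_mem(void)
  {
          /* keep early page-table allocations inside the already-mapped window */
          memblock_set_current_limit(PHYS_OFFSET + PGDIR_SIZE);

          /* ... create_mapping() for each memblock region ... */

          /* done: let later boot-time allocations come from anywhere again */
          memblock_set_current_limit(MEMBLOCK_ALLOC_ANYWHERE);
  }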