- 07 Oct 2015, 2 commits
-
-
Committed by Will Deacon

update_mmu_cache() consists of a dsb(ishst) instruction so that new user mappings are guaranteed to be visible to the page table walker on exception return. In reality this can be a very expensive operation which is rarely needed. Removing this barrier shows a modest improvement in hackbench scores and, in the worst case, we re-take the user fault and establish that there was nothing to do.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
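A rough before/after sketch of the idea (not the verbatim kernel diff; the hook is shown as a macro for brevity):

```c
/* Before: make new user mappings visible to the table walker immediately. */
#define update_mmu_cache(vma, addr, ptep)	\
	do { dsb(ishst); } while (0)

/* After: no barrier; a spurious user fault is simply re-taken and found benign. */
#define update_mmu_cache(vma, addr, ptep)	\
	do { } while (0)
```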
-
Committed by Andrey Ryabinin

To avoid using the lengthy (UL(0xffffffffffffffff) << VA_BITS) everywhere, replace it with VA_START.

Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
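The new macro is essentially the expression it names:

```c
/* Start of the kernel virtual address space (upper VA range). */
#define VA_START	(UL(0xffffffffffffffff) << VA_BITS)
```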
-
- 02 Oct 2015, 1 commit
-
-
Committed by Steve Capper

Commit 6910fa16 ("arm64: enable PTE type bit in the mask for pte_modify") fixes a problem whereby a large block of PROT_NONE mapped memory is incorrectly mapped as block descriptors when mprotect is called. Unfortunately, this fix introduced a subtle bug in the THP logic: if one mmaps a large block of memory and then faults it such that it is collapsed into THPs, subsequent calls to mprotect on this area of memory lead to incorrect table descriptors being written instead of block descriptors. This is because pmd_modify calls pte_modify, which is now allowed to modify the type of the page table entry. This patch reverts commit 6910fa16, and fixes the problem it was trying to address by adjusting PAGE_NONE to represent a table entry. Thus no change in pte type is required when moving from PROT_NONE to a different protection.

Fixes: 6910fa16 ("arm64: enable PTE type bit in the mask for pte_modify")
Cc: <stable@vger.kernel.org> # 4.0+
Cc: Feng Kan <fkan@apm.com>
Reported-by: Ganapatrao Kulkarni <Ganapatrao.Kulkarni@caviumnetworks.com>
Tested-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 14 Sep 2015, 3 commits
-
-
Committed by Will Deacon

Depending on CONFIG_ARM64_HW_AFDBM, we use either bit 57 or bit 51 of the pte to represent PTE_WRITE. Given that bit 51 is reserved prior to ARMv8.1, we can just use that bit regardless of the config option. That also matches what happens when a kernel configured with ARM64_HW_AFDBM=y is run on a CPU without the DBM functionality.

Cc: Julien Grall <julien.grall@citrix.com>
Tested-by: Julien Grall <julien.grall@citrix.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
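In effect, PTE_WRITE is aliased onto the (pre-ARMv8.1 reserved) DBM bit unconditionally; roughly:

```c
#define PTE_DBM		(_AT(pteval_t, 1) << 51)	/* Dirty Bit Management */
#define PTE_WRITE	(PTE_DBM)			/* same bit, with or without AF/DBM */
```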
-
Committed by Catalin Marinas

With hardware AF/DBM enabled, the pte_modify() function must transfer the hardware dirty information to the software PTE_DIRTY bit. However, it was setting this bit in newprot, and the mask does not cover that bit. This patch sets PTE_DIRTY on the original pte, which is then preserved in the returned value.

Fixes: 2f4b829c ("arm64: Add support for hardware updates of the access and dirty pte bits")
Cc: Julien Grall <julien.grall@citrix.com>
Tested-by: Julien Grall <julien.grall@citrix.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
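A minimal sketch of the fixed flow (the protection mask shown is an approximation): the hardware dirty state is folded into the original pte before the new protection is applied, so PTE_DIRTY survives in the returned value.

```c
static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
{
	const pteval_t mask = PTE_USER | PTE_PXN | PTE_UXN | PTE_RDONLY |
			      PTE_PROT_NONE | PTE_VALID | PTE_WRITE;

	/* Preserve hardware dirty state as the software PTE_DIRTY bit. */
	if (pte_hw_dirty(pte))
		pte = pte_mkdirty(pte);

	pte_val(pte) = (pte_val(pte) & ~mask) | (pgprot_val(newprot) & mask);
	return pte;
}
```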
-
Committed by Catalin Marinas

Commit 2f4b829c ("arm64: Add support for hardware updates of the access and dirty pte bits") introduced support for handling hardware updates of the access flag and dirty status. The PTE is automatically dirtied in hardware (if supported) by clearing the PTE_RDONLY bit when the PTE_DBM/PTE_WRITE bit is set. The pte_hw_dirty() macro was added to detect a hardware-dirtied pte, and the pte_dirty() macro checks for both the software PTE_DIRTY bit and pte_hw_dirty(). Functions like pte_modify() clear the PTE_RDONLY bit since it is meant to be set in set_pte_at() when written to memory. In such cases, pte_hw_dirty() would return true even though the pte is clean. This patch changes pte_hw_dirty() to test the PTE_DBM/PTE_WRITE bit together with PTE_RDONLY.

Fixes: 2f4b829c ("arm64: Add support for hardware updates of the access and dirty pte bits")
Reported-by: Julien Grall <julien.grall@citrix.com>
Tested-by: Julien Grall <julien.grall@citrix.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 28 Jul 2015, 1 commit
-
-
Committed by Will Deacon

pte_valid should check whether the PTE_VALID bit (1 << 0) is set in the pte, so fix the macro definition to use bitwise & instead of logical &&.

Signed-off-by: Will Deacon <will.deacon@arm.com>
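The corrected macro, roughly:

```c
/* Bitwise &, not logical &&: test the PTE_VALID bit itself. */
#define pte_valid(pte)		(!!(pte_val(pte) & PTE_VALID))
```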
-
- 27 Jul 2015, 3 commits
-
-
Committed by Will Deacon

Nobody seems to be producing !SMP systems anymore, so this is just becoming a source of kernel bugs, particularly if people want to use coherent DMA with non-shared pages. This patch forces CONFIG_SMP=y for arm64, removing a modest amount of code in the process.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Catalin Marinas

The ARMv8.1 architecture extensions introduce support for hardware updates of the access and dirty information in page table entries. With TCR_EL1.HA enabled, when the CPU accesses an address with the PTE_AF bit cleared in the page table, instead of raising an access flag fault the CPU sets the bit in the page table entry itself. To ensure that kernel modifications to the page tables do not inadvertently revert a change introduced by hardware updates, the exclusive monitor (ldxr/stxr) is adopted in the pte accessors. When TCR_EL1.HD is enabled, a write access to a memory location with the DBM (Dirty Bit Management) bit set in the corresponding pte automatically clears the read-only bit (AP[2]). The DBM bit maps onto the Linux PTE_WRITE bit, and to check whether a writable (DBM set) page is dirty, the kernel tests the PTE_RDONLY bit. In order to allow read-only and dirty pages, the kernel needs to preserve the software dirty bit. The hardware dirty status is transferred to the software dirty bit in ptep_set_wrprotect() (using a load/store exclusive loop) and pte_modify().

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
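A simplified sketch of the wrprotect path described above. The real code uses an ldxr/stxr (exclusive monitor) loop in inline assembly; this sketch substitutes a cmpxchg loop to show the same logic and should be read as an approximation, not the kernel's implementation.

```c
static inline void ptep_set_wrprotect(struct mm_struct *mm,
				      unsigned long addr, pte_t *ptep)
{
	pteval_t old, new;

	do {
		old = pte_val(READ_ONCE(*ptep));
		new = old;
		/*
		 * Hardware-dirty (DBM/PTE_WRITE set, AP[2]/PTE_RDONLY clear):
		 * move the dirty state into the software PTE_DIRTY bit so it
		 * is not lost when the entry is made read-only.
		 */
		if ((new & PTE_WRITE) && !(new & PTE_RDONLY))
			new |= PTE_DIRTY;
		new &= ~PTE_WRITE;		/* write-protect */
		/* Retry if a concurrent hardware update changed the pte. */
	} while (cmpxchg_relaxed(&pte_val(*ptep), old, new) != old);
}
```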
-
Committed by Will Deacon

Mark Brown reported an allnoconfig build failure in -next:

  Today's linux-next fails to build an arm64 allnoconfig due to "mm: make
  GUP handle pfn mapping unless FOLL_GET is requested" which causes:

  > arm64-allnoconfig
  > ../mm/gup.c:51:4: error: implicit declaration of function 'update_mmu_cache' [-Werror=implicit-function-declaration]

Fix the error by moving the function to asm/pgtable.h, as is the case for most other architectures.

Reported-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 15 Apr 2015, 1 commit
-
-
Committed by Kirill A. Shutemov

We want to use the number of page table levels to define mm_struct. Let's expose it as CONFIG_PGTABLE_LEVELS. ARM64_PGTABLE_LEVELS is renamed to PGTABLE_LEVELS and defined before sourcing init/Kconfig: arch/Kconfig defines the default value, and it is sourced from init/Kconfig.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 27 Feb 2015, 1 commit
-
-
Committed by Feng Kan

Caught during Trinity testing. pte_modify() does not allow the PTE type bit to be modified, which caused the test to hang the system. It was found that a PTE cannot transition from an inaccessible page (b00) to a valid page (b11) because the mask does not allow it. This happens when a big block of mmaped memory is set to PROT_NONE and then a small piece is broken off and set to PROT_WRITE | PROT_READ, causing a huge page split.

Signed-off-by: Feng Kan <fkan@apm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 12 Feb 2015, 1 commit
-
-
Committed by Kirill A. Shutemov

LKP has triggered a compiler warning after my recent patch "mm: account pmd page tables to the process":

  mm/mmap.c: In function 'exit_mmap':
  >> mm/mmap.c:2857:2: warning: right shift count >= width of type [enabled by default]

The code:

  > 2857	WARN_ON(mm_nr_pmds(mm) >
  > 2858			round_up(FIRST_USER_ADDRESS, PUD_SIZE) >> PUD_SHIFT);

In this code, on tile, FIRST_USER_ADDRESS is defined as 0, so round_up() produces the same type -- int -- which is then too narrow to be shifted right by PUD_SHIFT. I think the best way to fix it is to define FIRST_USER_ADDRESS as unsigned long, on every arch for consistency.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reported-by: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
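The per-arch change is a one-liner; sketched:

```c
/* Unsigned long on every architecture, so shifts by PUD_SHIFT are well defined. */
#define FIRST_USER_ADDRESS	0UL
```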
-
- 11 Feb 2015, 1 commit
-
-
Committed by Kirill A. Shutemov

We've replaced the remap_file_pages(2) implementation with emulation. Nobody creates non-linear mappings anymore. This patch also adjusts __SWP_TYPE_SHIFT and increases the number of bits available for the swap offset.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
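With the PTE_FILE bit gone, the swap entry packed into a non-present pte can start one bit lower; an approximate sketch of the encoding (exact field widths are illustrative):

```c
/*
 * Swap entry layout in a non-present pte:
 *   bits 2..7   swap type
 *   bits 8..    swap offset
 * (bit 0 must stay clear so the pte remains invalid)
 */
#define __SWP_TYPE_SHIFT	2
#define __SWP_TYPE_BITS		6
#define __SWP_OFFSET_SHIFT	(__SWP_TYPE_SHIFT + __SWP_TYPE_BITS)

#define __swp_type(x)		(((x).val >> __SWP_TYPE_SHIFT) & ((1 << __SWP_TYPE_BITS) - 1))
#define __swp_offset(x)		((x).val >> __SWP_OFFSET_SHIFT)
```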
-
- 28 Jan 2015, 1 commit
-
-
Committed by zhichang.yuan

For a 64K page system, after mapping a PMD section, the corresponding initial page table is no longer needed and can be freed.

Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>
[catalin.marinas@arm.com: added BUG_ON() to catch late memblock freeing]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 12 Jan 2015, 1 commit
-
-
Committed by Ard Biesheuvel

For UEFI, we need to install the memory mappings used for Runtime Services in a dedicated set of page tables. Add create_pgd_mapping(), which allows us to allocate and install those page table entries early.

Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
-
- 24 Dec 2014, 1 commit
-
-
Committed by Jungseok Lee

This patch adds a pgd_page definition in order to keep supporting the HAVE_GENERIC_RCU_GUP configuration. In addition, it changes the pud_page expression to align with pmd_page for readability. Introducing pgd_page resolves the following build breakage under the 4KB + 4-level memory management combination:

  mm/gup.c: In function 'gup_huge_pgd':
  mm/gup.c:889:2: error: implicit declaration of function 'pgd_page' [-Werror=implicit-function-declaration]
    head = pgd_page(orig);
    ^
  mm/gup.c:889:7: warning: assignment makes pointer from integer without a cast
    head = pgd_page(orig);

Cc: Will Deacon <will.deacon@arm.com>
Cc: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
[catalin.marinas@arm.com: remove duplicate pmd_page definition]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
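The added definition mirrors the existing pmd_page/pud_page helpers; roughly:

```c
/* struct page backing the table that this pgd entry points to. */
#define pgd_page(pgd)		pfn_to_page(__phys_to_pfn(pgd_val(pgd) & PHYS_MASK))
```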
-
- 11 Dec 2014, 1 commit
-
-
Committed by Kirill A. Shutemov

Like the small zero page, the huge zero page should not be accounted in smaps reports as a normal page. For small pages we rely on vm_normal_page() to filter out the zero page, but vm_normal_page() is not designed to handle pmds. We only get here due to a hackish cast from pmd to pte in smaps_pte_range() -- the pte and pmd formats are not necessarily compatible on each and every architecture. Let's add a separate codepath to handle pmds; follow_trans_huge_pmd() will detect the huge zero page for us. We need a pmd_dirty() helper to do this properly. The patch adds it to the THP-enabled architectures which don't yet have one.

[akpm@linux-foundation.org: use do_div to fix 32-bit build]
Signed-off-by: "Kirill A. Shutemov" <kirill@shutemov.name>
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Tested-by: Fengwei Yin <yfw.kernel@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
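On arm64 the new helper can simply reuse the pte accessor via the pmd-to-pte conversion:

```c
#define pmd_dirty(pmd)		pte_dirty(pmd_pte(pmd))
```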
-
- 10 Oct 2014, 2 commits
-
-
Committed by Ard Biesheuvel

Now that we support read-only memslots, we need to make sure that pass-through device mappings are not mapped writable if the guest has requested them to be read-only. The existing implementation already honours this by calling kvm_set_s2pte_writable() on the new pte in the case of writable mappings, so all we need to do is define the default pgprot_t value used for devices to be PTE_S2_RDONLY.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Steve Capper

Activate the RCU fast_gup for ARM64. We also need to force THP splits to broadcast an IPI so that we block in the fast_gup page walker. As THP splits are comparatively rare, this should not lead to a noticeable performance degradation. Some prerequisite functions, pud_write and pud_page, are also added.

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Tested-by: Dann Frazier <dann.frazier@canonical.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
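The pud helpers added for fast_gup follow the same pattern as their pmd counterparts; approximately:

```c
static inline pte_t pud_pte(pud_t pud)
{
	return __pte(pud_val(pud));
}

#define pud_write(pud)		pte_write(pud_pte(pud))
#define pud_page(pud)		pfn_to_page(__phys_to_pfn(pud_val(pud) & PHYS_MASK))
```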
-
- 01 Oct 2014, 1 commit
-
-
Committed by Liviu Dudau

Use the generic PCI domain and OF functions to provide support for PCI on arm64.

[bhelgaas: Change comments to use generic PCI, not just PCIe. Nothing at this level is PCIe-specific.]
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 08 Sep 2014, 1 commit
-
-
Committed by Laura Abbott

It is useful to be able to change individual bits in ptes at times. Introduce functions for this and update the existing pte_mk* functions to use these primitives.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
[will: added missing inline keyword for new header functions]
Signed-off-by: Will Deacon <will.deacon@arm.com>
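The primitives and their use in the pte_mk* helpers, sketched close to the actual patch:

```c
static inline pte_t clear_pte_bit(pte_t pte, pgprot_t prot)
{
	pte_val(pte) &= ~pgprot_val(prot);
	return pte;
}

static inline pte_t set_pte_bit(pte_t pte, pgprot_t prot)
{
	pte_val(pte) |= pgprot_val(prot);
	return pte;
}

static inline pte_t pte_wrprotect(pte_t pte)
{
	return clear_pte_bit(pte, __pgprot(PTE_WRITE));
}

static inline pte_t pte_mkwrite(pte_t pte)
{
	return set_pte_bit(pte, __pgprot(PTE_WRITE));
}
```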
-
- 24 Jul 2014, 1 commit
-
-
Committed by Catalin Marinas

The architecture specification states that both DSB and ISB are required between page table modifications and subsequent memory accesses using the corresponding virtual address. When TLB invalidation takes place, the tlb_flush_* functions already have the necessary barriers. However, there are other functions like create_mapping() for which this is not the case. The patch adds the DSB+ISB instructions in the set_pte() function for valid kernel mappings. The invalid pte case is handled by tlb_flush_*, and the user mappings in general have a corresponding update_mmu_cache() call containing a DSB. Even when update_mmu_cache() isn't called, the kernel can still cope with an unlikely spurious page fault by re-executing the instruction. In addition, the set_pmd() and set_pud() functions gain an ISB for architecture compliance when block mappings are created.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Steve Capper <steve.capper@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org>
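A sketch of the resulting set_pte() (simplified; treat the helper name pte_valid_not_user() and the exact comment as approximations of the era's code):

```c
static inline void set_pte(pte_t *ptep, pte_t pte)
{
	*ptep = pte;

	/*
	 * Only valid kernel (non-user) mappings need the barriers here:
	 * user mappings get a DSB via update_mmu_cache(), and invalid
	 * entries are covered by the TLB flushing routines.
	 */
	if (pte_valid_not_user(pte)) {
		dsb(ishst);
		isb();
	}
}
```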
-
- 23 Jul 2014, 5 commits
-
-
Committed by Catalin Marinas

Non-functional change to group together the pmd/pud definitions and reduce the number of #if CONFIG_ARM64_PGTABLE_LEVELS blocks.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
-
Committed by Catalin Marinas

Rather than guessing what the maximum vmemmap space should be, this patch allows the calculation based on VA_BITS and sizeof(struct page). The vmalloc space extends to the beginning of the vmemmap space. Since the virtual kernel memory layout now depends on the build configuration, this patch removes the detailed description in Documentation/arm64/memory.txt in favour of information printed during kernel booting.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
-
Committed by Catalin Marinas

Rather than having several Kconfig options, define an int option ARM64_PGTABLE_LEVELS, which will also be useful in converting some of the pgtable macros.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
-
Committed by Jungseok Lee

This patch implements 4 levels of translation tables, since 3 levels of page tables with 4KB pages cannot support the 40-bit physical address space described in [1] due to the following issue. It is a restriction that the kernel logical memory map with 4KB + 3 levels (0xffffffc000000000-0xffffffffffffffff) cannot cover the RAM region from 544GB to 1024GB in [1]. Specifically, the ARM64 kernel fails to create the mapping for this region in map_mem(), since __phys_to_virt for this region overflows. If an SoC design follows the document [1], RAM beyond 32GB would be placed starting at 544GB; even a 64GB system is supposed to use the region from 544GB to 576GB for only 32GB of RAM. Naturally, it follows that 4 levels of page tables should be enabled to avoid hacking __virt_to_phys and __phys_to_virt. However, it is recommended that 4 levels of page tables be enabled only if the memory map is too sparse or there is about 512GB of RAM.

References
----------
[1]: Principles of ARM Memory Maps, White Paper, Issue C

Signed-off-by: Jungseok Lee <jays.lee@samsung.com>
Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Steve Capper <steve.capper@linaro.org>
[catalin.marinas@arm.com: MEMBLOCK_INITIAL_LIMIT removed, same as PUD_SIZE]
[catalin.marinas@arm.com: early_ioremap_init() updated for 4 levels]
[catalin.marinas@arm.com: 48-bit VA depends on BROKEN until KVM is fixed]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
-
Committed by Jungseok Lee

This patch adds the virtual address space size and the number of translation table levels to the kernel configuration. It facilitates the introduction of different MMU options, such as 4KB + 4 levels, 16KB + 4 levels and 64KB + 3 levels. The idea is based on the discussion with Catalin Marinas: http://www.spinics.net/linux/lists/arm-kernel/msg319552.html

Signed-off-by: Jungseok Lee <jays.lee@samsung.com>
Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
-
- 17 Jul 2014, 1 commit
-
-
Committed by Catalin Marinas

Just keep the asm/page.h definition, as this is included in vmlinux.lds.S as well.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
-
- 04 Jul 2014, 1 commit
-
-
Committed by Steve Capper

The define ARM64_64K_PAGES is tested for rather than CONFIG_ARM64_64K_PAGES. Correct that typo here.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 18 Jun 2014, 1 commit
-
-
Committed by Will Deacon

This should be a plain old '&' and could easily lead to undefined behaviour if the target of a pmd_mknotpresent invocation was the same as the parameter.

Fixes: 9c7e535f ("arm64: mm: Route pmd thp functions through pte equivalents")
Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org> # v3.15
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
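The one-character fix, roughly:

```c
/* Bitwise & (not &&): clear the type bits so the pmd is no longer present. */
#define pmd_mknotpresent(pmd)	(__pmd(pmd_val(pmd) & ~PMD_TYPE_MASK))
```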
-
- 29 May 2014, 1 commit
-
-
Committed by Will Deacon

Commit 9c7e535f ("arm64: mm: Route pmd thp functions through pte equivalents") changed the pmd manipulator and accessor functions to convert the target pmd to a pte, process it with the pte functions, then convert it back. Along the way, we gained support for PTE_WRITE; however, this is completely ignored by set_pmd_at, and so we fail to set PMD_SECT_RDONLY for PMDs, resulting in all sorts of lovely failures (like CoW not working). Partially reverting the offending commit (by making use of PMD_SECT_RDONLY explicitly for the pmd_{write,wrprotect,mkwrite} functions) leads to further issues, because pmd_write can then return potentially incorrect values for page table entries marked as RDONLY, leading to BUG_ON(pmd_write(entry)) tripping under some THP workloads. This patch fixes the issue by routing set_pmd_at through set_pte_at, which correctly takes the PTE_WRITE flag into account. Given that THP mappings are always anonymous, the additional cache-flushing code in __sync_icache_dcache won't impose any significant overhead as the flush will be skipped.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Steve Capper <steve.capper@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 16 May 2014, 1 commit
-
-
Committed by Catalin Marinas

This reverts commit bc07c2c6. While the aim is increased security for --x memory maps, it does not protect against kernel-level reads. Until SECCOMP is implemented for arm64, revert this patch to avoid giving a false idea of execute-only mappings.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 10 May 2014, 1 commit
-
-
Committed by Will Deacon

When calling our low-level barrier macros directly, we can often suffice with more relaxed behaviour than the default "all accesses, full system" option. This patch updates the users of dsb() to specify the option which they actually require.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 09 May 2014, 3 commits
-
-
Committed by Steve Capper

We have the capability to map 1GB level-1 blocks when using a 4K granule. This patch adjusts the create_mapping logic such that, when mapping physical memory on boot, we attempt to use a 1GB block if both the VA and PA start and end are 1GB aligned. This both reduces the levels of lookup required to resolve a kernel logical address and reduces TLB pressure on cores that support 1GB TLB entries.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Tested-by: Jungseok Lee <jays.lee@samsung.com>
[catalin.marinas@arm.com: s/prot_sect_kernel/PROT_SECT_NORMAL_EXEC/]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Catalin Marinas

The primary aim of this patchset is to remove the pgprot_default and prot_sect_default global variables and rely strictly on predefined values. The original goal was to be able to run SMP kernels on UP hardware by not setting the Shareability bit. However, it is unlikely that we will see UP ARMv8 hardware, and even if we do, the Shareability bit is no longer assumed to disable cacheable accesses. A side effect is that the device mappings now have the Shareability attribute set. The hardware, however, should ignore it, since Device accesses are always Outer Shareable. Following the removal of the two global variables, there is some PROT_* macro reshuffling and cleanup, including the __PAGE_* macros (replaced by PAGE_*).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
-
Committed by Catalin Marinas

The ARMv8 architecture allows execute-only user permissions by clearing the PTE_UXN and PTE_USER bits. The kernel, however, can still access such a page, so execute-only page permission does not protect against read(2)/write(2) etc. accesses. Systems requiring such protection must implement/enable features like SECCOMP. This patch changes the arm64 __P100 and __S100 protection_map[] macros to the new __PAGE_EXECONLY attributes. A side effect is that pte_valid_user() no longer triggers for __PAGE_EXECONLY since PTE_USER isn't set. To work around this, the check is done on the PTE_NG bit via the pte_valid_ng() macro. VM_READ is also checked now for page faults.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
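Approximate definitions behind the description above (exact bit lists may differ):

```c
/* Execute-only: PTE_USER and PTE_UXN clear, so EL0 can execute but not read. */
#define __PAGE_EXECONLY		__pgprot(_PAGE_DEFAULT | PTE_NG | PTE_PXN)

/* PTE_USER is clear for exec-only pages, so validity is keyed off PTE_NG. */
#define pte_valid_ng(pte) \
	((pte_val(pte) & (PTE_NG | PTE_VALID)) == (PTE_NG | PTE_VALID))
```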
-
- 05 May 2014, 1 commit
-
-
Committed by Geert Uytterhoeven

Signed-off-by: Geert Uytterhoeven <geert+renesas@linux-m68k.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 24 Mar 2014, 1 commit
-
-
Committed by Catalin Marinas

Since this macro is identical to pgprot_writecombine() and is only used in a single place, remove it completely to avoid confusion. On ARMv7+ processors, the coherent DMA mapping must be Normal NonCacheable (a.k.a. writecombine) to avoid mismatched hardware attribute aliases (with the kernel linear mapping being Normal Cacheable).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
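For reference, the surviving definition that coherent DMA mappings now use directly (approximate):

```c
/* Normal Non-cacheable ("writecombine") attributes for a mapping. */
#define pgprot_writecombine(prot) \
	__pgprot_modify(prot, PTE_ATTRINDX_MASK, \
			PTE_ATTRINDX(MT_NORMAL_NC) | PTE_PXN | PTE_UXN)
```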
-
- 15 Mar 2014, 1 commit
-
-
Committed by Steve Capper

Rather than having separate hugetlb and transparent huge page pmd manipulation functions, re-wire our thp functions to simply call the pte equivalents. This allows THP to take advantage of the new PTE_WRITE logic introduced in commit c2c93e5b ("arm64: mm: Introduce PTE_WRITE"). To represent splitting THPs we use the PTE_SPECIAL bit, as this is not used for pmds.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
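The re-wiring is mostly a set of thin wrappers that convert pmd to pte, reuse the pte helpers, and convert back; a representative sketch:

```c
#define pmd_pte(pmd)		__pte(pmd_val(pmd))
#define pte_pmd(pte)		__pmd(pte_val(pte))

#define pmd_mkold(pmd)		pte_pmd(pte_mkold(pmd_pte(pmd)))
#define pmd_wrprotect(pmd)	pte_pmd(pte_wrprotect(pmd_pte(pmd)))
#define pmd_mkwrite(pmd)	pte_pmd(pte_mkwrite(pmd_pte(pmd)))
/* PTE_SPECIAL is unused for pmds, so it can mark a splitting THP. */
#define pmd_mksplitting(pmd)	pte_pmd(pte_mkspecial(pmd_pte(pmd)))
```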
-