1. 17 May 2022, 1 commit
  2. 08 Mar 2022, 1 commit
  3. 08 Nov 2021, 1 commit
    • arm64: pgtable: make __pte_to_phys/__phys_to_pte_val inline functions · c7c386fb
      Committed by Arnd Bergmann
      gcc warns about undefined behavior in the vmalloc code when building
      with CONFIG_ARM64_PA_BITS_52, when the 'idx++' in the argument to
      __phys_to_pte_val() is evaluated twice:
      
      mm/vmalloc.c: In function 'vmap_pfn_apply':
      mm/vmalloc.c:2800:58: error: operation on 'data->idx' may be undefined [-Werror=sequence-point]
       2800 |         *pte = pte_mkspecial(pfn_pte(data->pfns[data->idx++], data->prot));
            |                                                 ~~~~~~~~~^~
      arch/arm64/include/asm/pgtable-types.h:25:37: note: in definition of macro '__pte'
         25 | #define __pte(x)        ((pte_t) { (x) } )
            |                                     ^
      arch/arm64/include/asm/pgtable.h:80:15: note: in expansion of macro '__phys_to_pte_val'
         80 |         __pte(__phys_to_pte_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
            |               ^~~~~~~~~~~~~~~~~
      mm/vmalloc.c:2800:30: note: in expansion of macro 'pfn_pte'
       2800 |         *pte = pte_mkspecial(pfn_pte(data->pfns[data->idx++], data->prot));
            |                              ^~~~~~~
      
      I have no idea why this never showed up earlier, but the safest
      workaround appears to be changing those macros into inline functions
      so the arguments get evaluated only once.
      
      Cc: Matthew Wilcox <willy@infradead.org>
      Fixes: 75387b92 ("arm64: handle 52-bit physical addresses in page table entries")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Link: https://lore.kernel.org/r/20211105075414.2553155-1-arnd@kernel.org
      Signed-off-by: Will Deacon <will@kernel.org>
      c7c386fb
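
      A minimal sketch of the shape of the fix for the CONFIG_ARM64_PA_BITS_52
      case, assuming the existing PTE_ADDR_LOW/PTE_ADDR_HIGH/PTE_ADDR_MASK field
      masks; as inline functions, the pfn argument (and its 'idx++' side effect)
      is evaluated exactly once:

      static inline phys_addr_t __pte_to_phys(pte_t pte)
      {
      	/* PA bits [51:48] live in pte bits [15:12] in this layout */
      	return (pte_val(pte) & PTE_ADDR_LOW) |
      	       ((pte_val(pte) & PTE_ADDR_HIGH) << 36);
      }

      static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
      {
      	return (phys | (phys >> 36)) & PTE_ADDR_MASK;
      }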
  4. 29 Sep 2021, 1 commit
  5. 26 Aug 2021, 1 commit
  6. 09 Jul 2021, 2 commits
  7. 02 Jul 2021, 1 commit
    • mm: define default value for FIRST_USER_ADDRESS · fac7757e
      Committed by Anshuman Khandual
      Currently most platforms define FIRST_USER_ADDRESS as 0UL, duplicating the
      same code all over.  Instead, just define a generic default value (i.e. 0UL)
      for FIRST_USER_ADDRESS and let platforms override it when required.  This
      makes the code cleaner and smaller.
      
      The default FIRST_USER_ADDRESS here would be skipped in <linux/pgtable.h>
      when the given platform overrides its value via <asm/pgtable.h>.
      
      Link: https://lkml.kernel.org/r/1620615725-24623-1-git-send-email-anshuman.khandual@arm.com
      Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>	[m68k]
      Acked-by: Guo Ren <guoren@kernel.org>			[csky]
      Acked-by: Stafford Horne <shorne@gmail.com>		[openrisc]
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>	[arm64]
      Acked-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: Palmer Dabbelt <palmerdabbelt@google.com>	[RISC-V]
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Heiko Carstens <hca@linux.ibm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Chris Zankel <chris@zankel.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fac7757e
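
      Sketched, the resulting pattern in <linux/pgtable.h> is a guarded default
      that any platform can pre-empt from its own <asm/pgtable.h>:

      #ifndef FIRST_USER_ADDRESS
      #define FIRST_USER_ADDRESS	0UL
      #endif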
  8. 22 Jun 2021, 1 commit
  9. 26 May 2021, 1 commit
  10. 26 Mar 2021, 1 commit
  11. 10 Mar 2021, 1 commit
  12. 20 Jan 2021, 1 commit
    • arm64: mm: Implement arch_wants_old_prefaulted_pte() · 0388f9c7
      Committed by Will Deacon
      On CPUs with hardware AF/DBM, initialising prefaulted PTEs as 'old'
      improves vmscan behaviour and does not appear to introduce any overhead
      elsewhere.
      
      Implement arch_wants_old_prefaulted_pte() to return 'true' if we detect
      hardware access flag support at runtime. This can be extended in future
      based on MIDR matching if necessary.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      0388f9c7
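
      A sketch of what such an implementation looks like, assuming the existing
      cpu_has_hw_af() capability check:

      /* Prefaulted PTEs can start out 'old' when the CPU sets the Access
       * Flag in hardware, so the first real access needs no extra fault. */
      static inline bool arch_wants_old_prefaulted_pte(void)
      {
      	return cpu_has_hw_af();
      }
      #define arch_wants_old_prefaulted_pte	arch_wants_old_prefaulted_pte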
  13. 16 Dec 2020, 1 commit
    • arm64: mremap speedup - enable HAVE_MOVE_PUD · f5308c89
      Committed by Kalesh Singh
      HAVE_MOVE_PUD enables remapping pages at the PUD level if both the source
      and destination addresses are PUD-aligned.
      
      With HAVE_MOVE_PUD enabled, the data below indicate approximately a 19x
      improvement in mremap performance on arm64.
      
      ------- Test Results ---------
      
      The following results were obtained using a 5.4 kernel, by remapping a
      PUD-aligned, 1GB sized region to a PUD-aligned destination.  The results
      from 10 iterations of the test are given below:
      
      Total mremap times for 1GB data on arm64. All times are in nanoseconds.
      
        Control          HAVE_MOVE_PUD
      
        1247761          74271
        1219896          46771
        1094792          59687
        1227760          48385
        1043698          76666
        1101771          50365
        1159896          52500
        1143594          75261
        1025833          61354
        1078125          48697
      
        1134312.6        59395.7    <-- Mean time in nanoseconds
      
      A 1GB mremap completion time drops from ~1.1 milliseconds to ~59
      microseconds on arm64.  (~19x speed up).
      
      Link: https://lkml.kernel.org/r/20201014005320.2233162-5-kaleshsingh@google.com
      Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Geffon <bgeffon@google.com>
      Cc: Christian Brauner <christian.brauner@ubuntu.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Frederic Weisbecker <frederic@kernel.org>
      Cc: Gavin Shan <gshan@redhat.com>
      Cc: Hassan Naveed <hnaveed@wavecomp.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jia He <justin.he@arm.com>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Krzysztof Kozlowski <krzk@kernel.org>
      Cc: Lokesh Gidra <lokeshgidra@google.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Masahiro Yamada <masahiroy@kernel.org>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Mike Rapoport <rppt@kernel.org>
      Cc: Mina Almasry <almasrymina@google.com>
      Cc: Minchan Kim <minchan@google.com>
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Ralph Campbell <rcampbell@nvidia.com>
      Cc: Ram Pai <linuxram@us.ibm.com>
      Cc: Sami Tolvanen <samitolvanen@google.com>
      Cc: Sandipan Das <sandipan@linux.ibm.com>
      Cc: SeongJae Park <sjpark@amazon.de>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Steven Price <steven.price@arm.com>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Zi Yan <ziy@nvidia.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f5308c89
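
      The arm64 change itself is essentially a Kconfig select of HAVE_MOVE_PUD;
      the condition the generic mremap code gates on can be sketched with a
      hypothetical helper (illustration only, not the in-tree function):

      /* A whole PUD entry can only be moved when source and destination are
       * both PUD-aligned and the region spans at least one full PUD. */
      static bool can_move_by_pud(unsigned long old_addr, unsigned long new_addr,
      			    unsigned long len)
      {
      	return IS_ALIGNED(old_addr, PUD_SIZE) &&
      	       IS_ALIGNED(new_addr, PUD_SIZE) &&
      	       len >= PUD_SIZE;
      }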
  14. 03 Dec 2020, 1 commit
  15. 24 Nov 2020, 2 commits
  16. 11 Nov 2020, 1 commit
    • arm64: consistently use reserved_pg_dir · 833be850
      Committed by Mark Rutland
      Depending on configuration options and specific code paths, we either
      use the empty_zero_page or the configuration-dependent reserved_ttbr0
      as a reserved value for TTBR{0,1}_EL1.
      
      To simplify this code, let's always allocate and use the same
      reserved_pg_dir, replacing reserved_ttbr0. Note that this is statically
      allocated (and hence pre-zeroed), and is also marked as read-only in the
      kernel Image mapping.
      
      Keeping this separate from the empty_zero_page potentially helps with
      robustness as the empty_zero_page is used in a number of cases where a
      failure to map it read-only could allow it to become corrupted.
      
      The (presently unused) swapper_pg_end symbol is also removed, and
      comments are added wherever we rely on the offsets between the
      pre-allocated pg_dirs to keep these cases easily identifiable.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201103102229.8542-1-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      833be850
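
      A sketch of how the reserved tables end up being used, close to the arm64
      helper that parks TTBR0_EL1 on a page table with no valid entries:

      static inline void cpu_set_reserved_ttbr0(void)
      {
      	unsigned long ttbr = phys_to_ttbr(__pa_symbol(reserved_pg_dir));

      	write_sysreg(ttbr, ttbr0_el1);
      	isb();
      }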
  17. 10 Nov 2020, 1 commit
  18. 15 Oct 2020, 1 commit
    • arm64: mm: use single quantity to represent the PA to VA translation · 7bc1a0f9
      Committed by Ard Biesheuvel
      On arm64, the global variable memstart_addr represents the physical
      address of PAGE_OFFSET, and so physical to virtual translations or
      vice versa used to come down to simple additions or subtractions
      involving the values of PAGE_OFFSET and memstart_addr.
      
      When support for 52-bit virtual addressing was introduced, we had to
      deal with PAGE_OFFSET potentially being outside of the region that
      can be covered by the virtual range (as the 52-bit VA capable build
      needs to be able to run on systems that are only 48-bit VA capable),
      and for this reason, another translation was introduced, and recorded
      in the global variable physvirt_offset.
      
      However, if we go back to the original definition of memstart_addr,
      i.e., the physical address of PAGE_OFFSET, it turns out that there is
      no need for two separate translations: instead, we can simply subtract
      the size of the unaddressable VA space from memstart_addr to make the
      available physical memory appear in the 48-bit addressable VA region.
      
      This simplifies things, but also fixes a bug on KASLR builds, which
      may update memstart_addr later on in arm64_memblock_init(), but fails
      to update vmemmap and physvirt_offset accordingly.
      
      Fixes: 5383cc6e ("arm64: mm: Introduce vabits_actual")
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Steve Capper <steve.capper@arm.com>
      Link: https://lore.kernel.org/r/20201008153602.9467-2-ardb@kernel.org
      Signed-off-by: Will Deacon <will@kernel.org>
      7bc1a0f9
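
      Sketched (not the verbatim definitions), the linear-map translation then
      needs only the single PHYS_OFFSET/memstart_addr quantity in each direction:

      #define __lm_to_phys(addr)	(((addr) - PAGE_OFFSET) + PHYS_OFFSET)
      #define __phys_to_virt(x)	((unsigned long)((x) - PHYS_OFFSET) + PAGE_OFFSET)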
  19. 01 Oct 2020, 1 commit
  20. 14 Sep 2020, 1 commit
  21. 11 Sep 2020, 2 commits
    • arm64/mm: Enable THP migration · 53fa117b
      Committed by Anshuman Khandual
      In certain page migration situations, a THP page can be migrated without
      being split into its constituent subpages. This saves the time required to
      split a THP and put it back together afterwards. It also preserves the
      wider address range translation covered by a single TLB entry, reducing
      future page fault costs.
      
      A previous patch changed platform THP helpers per generic memory semantics,
      clearing the path for THP migration support. This adds two more THP helpers
      required to create PMD migration swap entries. Now enable THP migration via
      ARCH_ENABLE_THP_MIGRATION.
      Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: Suzuki Poulose <suzuki.poulose@arm.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-kernel@vger.kernel.org
      Link: https://lore.kernel.org/r/1599627183-14453-3-git-send-email-anshuman.khandual@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
      53fa117b
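
      The "two more THP helpers" are the PMD analogues of the existing PTE
      swap-entry conversions, roughly as below, plus a select of
      ARCH_ENABLE_THP_MIGRATION in the arm64 Kconfig:

      #define __pmd_to_swp_entry(pmd)		((swp_entry_t) { pmd_val(pmd) })
      #define __swp_entry_to_pmd(swp)		__pmd((swp).val)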
    • arm64/mm: Change THP helpers to comply with generic MM semantics · b65399f6
      Committed by Anshuman Khandual
      pmd_present() and pmd_trans_huge() are expected to behave in the following
      manner during the various phases of a given PMD. This is derived from a
      previous detailed discussion on the topic [1] and the current THP
      documentation [2].
      
      pmd_present(pmd):
      
      - Returns true if pmd refers to system RAM with a valid pmd_page(pmd)
      - Returns false if pmd refers to a migration or swap entry
      
      pmd_trans_huge(pmd):
      
      - Returns true if pmd refers to system RAM and is a trans huge mapping
      
      -------------------------------------------------------------------------
      |	PMD states	|	pmd_present	|	pmd_trans_huge	|
      -------------------------------------------------------------------------
      |	Mapped		|	Yes		|	Yes		|
      -------------------------------------------------------------------------
      |	Splitting	|	Yes		|	Yes		|
      -------------------------------------------------------------------------
      |	Migration/Swap	|	No		|	No		|
      -------------------------------------------------------------------------
      
      The problem:
      
      A PMD is first invalidated with pmdp_invalidate() before it is split. This
      invalidation clears PMD_SECT_VALID as below.
      
      PMD Split -> pmdp_invalidate() -> pmd_mkinvalid -> Clears PMD_SECT_VALID
      
      Once PMD_SECT_VALID gets cleared, pmd_present() returns false on the PMD
      entry. Another bit apart from PMD_SECT_VALID is needed to re-affirm
      pmd_present() as true during the THP split process. To comply with the
      above semantics, pmd_trans_huge() should also check pmd_present() first
      before testing for the presence of an actual transparent huge mapping.
      
      The solution:
      
      Ideally PMD_TYPE_SECT should have been used here instead. But it shares the
      bit position with PMD_SECT_VALID which is used for THP invalidation. Hence
      it will not be there for pmd_present() check after pmdp_invalidate().
      
      A new software defined PMD_PRESENT_INVALID (bit 59) can be set on the PMD
      entry during invalidation which can help pmd_present() return true and in
      recognizing the fact that it still points to memory.
      
      This bit is transient. During the split process it will be overridden by a
      page table page representing normal pages in place of the erstwhile huge
      page. Other pmdp_invalidate() callers always write a fresh PMD value to the
      entry, overriding this transient PMD_PRESENT_INVALID bit, which makes it
      safe.
      
      [1]: https://lkml.org/lkml/2018/10/17/231
      [2]: https://www.kernel.org/doc/Documentation/vm/transhuge.txt
      Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: Suzuki Poulose <suzuki.poulose@arm.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-kernel@vger.kernel.org
      Link: https://lore.kernel.org/r/1599627183-14453-2-git-send-email-anshuman.khandual@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
      b65399f6
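
      Sketched, with PMD_PRESENT_INVALID as the software bit described above
      (exact expressions approximate):

      #define PMD_PRESENT_INVALID	(_AT(pteval_t, 1) << 59)	/* only if !PTE_VALID */

      #define pmd_present_invalid(pmd) (!!(pmd_val(pmd) & PMD_PRESENT_INVALID))
      #define pmd_present(pmd)	(pte_present(pmd_pte(pmd)) || pmd_present_invalid(pmd))
      #define pmd_trans_huge(pmd)	(pmd_present(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT))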
  22. 04 Sep 2020, 3 commits
    • arm64: mte: Enable swap of tagged pages · 36943aba
      Committed by Steven Price
      When swapping pages out to disk it is necessary to save any tags that
      have been set, and restore when swapping back in. Make use of the new
      page flag (PG_ARCH_2, locally named PG_mte_tagged) to identify pages
      with tags. When swapping out these pages the tags are stored in memory
      and later restored when the pages are brought back in. Because shmem can
      swap pages back in without restoring the userspace PTE it is also
      necessary to add a hook for shmem.
      Signed-off-by: Steven Price <steven.price@arm.com>
      [catalin.marinas@arm.com: move function prototypes to mte.h]
      [catalin.marinas@arm.com: drop '_tags' from arch_swap_restore_tags()]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Will Deacon <will@kernel.org>
      36943aba
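
      The hooks this introduces have roughly the following shape (signatures
      approximate):

      int arch_prepare_to_swap(struct page *page);		/* save tags at swap-out */
      void arch_swap_restore(swp_entry_t entry, struct page *page);	/* shmem path */
      void arch_swap_invalidate_page(int type, pgoff_t offset);	/* drop saved tags */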
    • arm64: mte: Add PROT_MTE support to mmap() and mprotect() · 9f341931
      Committed by Catalin Marinas
      To enable tagging on a memory range, the user must explicitly opt in via
      a new PROT_MTE flag passed to mmap() or mprotect(). Since this is a new
      memory type in the AttrIndx field of a pte, simplify the or'ing of these
      bits over the protection_map[] attributes by making MT_NORMAL index 0.
      
      There are two conditions for arch_vm_get_page_prot() to return the
      MT_NORMAL_TAGGED memory type: (1) the user requested it via PROT_MTE,
      registered as VM_MTE in the vm_flags, and (2) the vma supports MTE,
      decided during the mmap() call (only) and registered as VM_MTE_ALLOWED.
      
      arch_calc_vm_prot_bits() is responsible for registering the user request
      as VM_MTE. The newly introduced arch_calc_vm_flag_bits() sets
      VM_MTE_ALLOWED if the mapping is MAP_ANONYMOUS. An MTE-capable
      filesystem (RAM-based) may be able to set VM_MTE_ALLOWED during its
      mmap() file ops call.
      
      In addition, update VM_DATA_DEFAULT_FLAGS to allow mprotect(PROT_MTE) on
      stack or brk area.
      
      The Linux mmap() syscall currently ignores unknown PROT_* flags. In the
      presence of MTE, an mmap(PROT_MTE) on a file which does not support MTE
      will not report an error and the memory will not be mapped as Normal
      Tagged. For consistency, mprotect(PROT_MTE) will not report an error
      either if the memory range does not support MTE. Two subsequent patches
      in the series will propose tightening of this behaviour.
      Co-developed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      9f341931
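
      Sketched, assuming the existing system_supports_mte() capability check,
      the two mman.h hooks described above look approximately like:

      static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
      						   unsigned long pkey)
      {
      	if (system_supports_mte() && (prot & PROT_MTE))
      		return VM_MTE;
      	return 0;
      }

      static inline unsigned long arch_calc_vm_flag_bits(unsigned long flags)
      {
      	/* only anonymous mappings are known-safe for tagging by default */
      	if (system_supports_mte() && (flags & MAP_ANONYMOUS))
      		return VM_MTE_ALLOWED;
      	return 0;
      }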
    • arm64: mte: Clear the tags when a page is mapped in user-space with PROT_MTE · 34bfeea4
      Committed by Catalin Marinas
      Pages allocated by the kernel are not guaranteed to have the tags
      zeroed, especially as the kernel does not (yet) use MTE itself. To
      ensure the user can still access such pages when mapped into its address
      space, clear the tags via set_pte_at(). A new page flag - PG_mte_tagged
      (PG_arch_2) - is used to track pages with valid allocation tags.
      
      Since the zero page is mapped as pte_special(), it won't be covered by
      the above set_pte_at() mechanism. Clear its tags during early MTE
      initialisation.
      Co-developed-by: Steven Price <steven.price@arm.com>
      Signed-off-by: Steven Price <steven.price@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      34bfeea4
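
      A sketch of the set_pte_at() side, with a hypothetical helper name (the
      real logic lives in the arm64 MTE support code):

      static void mte_sync_tags_sketch(pte_t pte)
      {
      	struct page *page = pte_page(pte);

      	/* clear the tags once per page; PG_mte_tagged records that the
      	 * allocation tags are now valid for user access */
      	if (!test_and_set_bit(PG_mte_tagged, &page->flags))
      		mte_clear_page_tags(page_address(page));
      }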
  23. 07 Jul 2020, 1 commit
  24. 17 Jun 2020, 1 commit
  25. 10 Jun 2020, 2 commits
    • mm: consolidate pte_index() and pte_offset_*() definitions · 974b9b2c
      Committed by Mike Rapoport
      All architectures define pte_index() as
      
      	(address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)
      
      and all architectures define pte_offset_kernel() as an entry in the array
      of PTEs indexed by the pte_index().
      
      For most architectures, the pte_offset_kernel() implementation relies
      on the availability of pmd_page_vaddr() that converts a PMD entry value to
      the virtual address of the page containing PTEs array.
      
      Let's move x86 definitions of the PTE accessors to the generic place in
      <linux/pgtable.h> and then simply drop the respective definitions from the
      other architectures.
      
      The architectures that didn't provide pmd_page_vaddr() are updated to have
      that defined.
      
      The generic implementation of pte_offset_kernel() can be overridden by an
      architecture and alpha makes use of this because it has special ordering
      requirements for its version of pte_offset_kernel().
      
      [rppt@linux.ibm.com: v2]
        Link: http://lkml.kernel.org/r/20200514170327.31389-11-rppt@kernel.org
      [rppt@linux.ibm.com: update]
        Link: http://lkml.kernel.org/r/20200514170327.31389-12-rppt@kernel.org
      [rppt@linux.ibm.com: update]
        Link: http://lkml.kernel.org/r/20200514170327.31389-13-rppt@kernel.org
      [akpm@linux-foundation.org: fix x86 warning]
      [sfr@canb.auug.org.au: fix powerpc build]
        Link: http://lkml.kernel.org/r/20200607153443.GB738695@linux.ibm.com
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vincent Chen <deanbo422@gmail.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: http://lkml.kernel.org/r/20200514170327.31389-10-rppt@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      974b9b2c
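
      The generic definitions this converges on are essentially:

      static inline unsigned long pte_index(unsigned long address)
      {
      	return (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
      }

      #ifndef pte_offset_kernel
      static inline pte_t *pte_offset_kernel(pmd_t *pmd, unsigned long address)
      {
      	return (pte_t *)pmd_page_vaddr(*pmd) + pte_index(address);
      }
      #define pte_offset_kernel pte_offset_kernel
      #endif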
    • mm: introduce include/linux/pgtable.h · ca5999fd
      Committed by Mike Rapoport
      The include/linux/pgtable.h is going to be the home of generic page table
      manipulation functions.
      
      Start with moving asm-generic/pgtable.h to include/linux/pgtable.h and
      make the latter include asm/pgtable.h.
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vincent Chen <deanbo422@gmail.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: http://lkml.kernel.org/r/20200514170327.31389-3-rppt@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ca5999fd
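
      In sketch form, the new header starts life as little more than a wrapper
      that generic helpers are then moved into over time:

      /* include/linux/pgtable.h */
      #ifndef _LINUX_PGTABLE_H
      #define _LINUX_PGTABLE_H

      #include <asm/pgtable.h>

      #endif /* _LINUX_PGTABLE_H */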
  26. 05 Jun 2020, 1 commit
    • arm64: add support for folded p4d page tables · e9f63768
      Committed by Mike Rapoport
      Implement primitives necessary for the 4th level folding, add walks of p4d
      level where appropriate, replace 5level-fixup.h with pgtable-nop4d.h and
      remove __ARCH_USE_5LEVEL_HACK.
      
      [arnd@arndb.de: fix gcc-10 shift warning]
        Link: http://lkml.kernel.org/r/20200429185657.4085975-1-arnd@arndb.de
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christophe Leroy <christophe.leroy@c-s.fr>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Geert Uytterhoeven <geert+renesas@glider.be>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: http://lkml.kernel.org/r/20200414153455.21744-4-rppt@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e9f63768
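
      What the folding means in practice, sketched along the lines of
      asm-generic/pgtable-nop4d.h: the p4d level collapses onto the pgd, so the
      extra walk step is a no-op:

      typedef struct { pgd_t pgd; } p4d_t;

      #define PTRS_PER_P4D	1

      static inline p4d_t *p4d_offset(pgd_t *pgd, unsigned long address)
      {
      	return (p4d_t *)pgd;	/* each pgd entry doubles as its single p4d */
      }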
  27. 04 Jun 2020, 1 commit
  28. 03 Jun 2020, 1 commit
    • mm: enforce that vmap can't map pages executable · cca98e9f
      Committed by Christoph Hellwig
      To help enforce W^X protection, don't allow remapping existing pages as
      executable.
      
      x86 bits from Peter Zijlstra, arm64 bits from Mark Rutland.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Christophe Leroy <christophe.leroy@c-s.fr>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: David Airlie <airlied@linux.ie>
      Cc: Gao Xiang <xiang@kernel.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Haiyang Zhang <haiyangz@microsoft.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Laura Abbott <labbott@redhat.com>
      Cc: Michael Kelley <mikelley@microsoft.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Sakari Ailus <sakari.ailus@linux.intel.com>
      Cc: Stephen Hemminger <sthemmin@microsoft.com>
      Cc: Sumit Semwal <sumit.semwal@linaro.org>
      Cc: Wei Liu <wei.liu@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Paul Mackerras <paulus@ozlabs.org>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: http://lkml.kernel.org/r/20200414131348.444715-20-hch@lst.de
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cca98e9f
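
      The mechanism, sketched: vmap() masks execute permission out of the
      caller-supplied protection using an arch helper along the lines of arm64's
      pgprot_nx(), which sets the execute-never attribute:

      #define pgprot_nx(prot) \
      	__pgprot_modify(prot, 0, PTE_PXN)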
  29. 28 Apr 2020, 2 commits
  30. 17 Mar 2020, 1 commit
    • arm64: Basic Branch Target Identification support · 8ef8f360
      Committed by Dave Martin
      This patch adds the bare minimum required to expose the ARMv8.5
      Branch Target Identification feature to userspace.
      
      By itself, this does _not_ automatically enable BTI for any initial
      executable pages mapped by execve().  This will come later, but for
      now it should be possible to enable BTI manually on those pages by
      using mprotect() from within the target process.
      
      Other arches already using the generic mman.h are already using
      0x10 for arch-specific prot flags, so we use that for PROT_BTI
      here.
      
      For consistency, signal handler entry points in BTI guarded pages
      are required to be annotated as such, just like any other function.
      This blocks a relatively minor attack vector, but conforming userspace
      will have the annotations anyway, so we may as well enforce them.
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      8ef8f360
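
      The user-visible part is the new protection flag, with the value chosen to
      match other architectures' arch-specific prot bits as noted above (sketch
      of the uapi definition):

      #define PROT_BTI	0x10		/* BTI guarded page */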
  31. 04 Feb 2020, 1 commit
    • arm64: mm: add p?d_leaf() definitions · 8aa82df3
      Committed by Steven Price
      walk_page_range() is going to be allowed to walk page tables other than
      those of user space.  For this it needs to know when it has reached a
      'leaf' entry in the page tables.  This information will be provided by the
      p?d_leaf() functions/macros.
      
      For arm64, we already have p?d_sect() macros which we can reuse for
      p?d_leaf().
      
      pud_sect() is defined as a dummy function when CONFIG_PGTABLE_LEVELS < 3
      or CONFIG_ARM64_64K_PAGES is defined.  However when the kernel is
      configured this way then architecturally it isn't allowed to have a large
      page at this level, and any code using these page walking macros is
      implicitly relying on the page size/number of levels being the same as the
      kernel.  So it is safe to reuse this for p?d_leaf() as it is an
      architectural restriction.
      
      Link: http://lkml.kernel.org/r/20191218162402.45610-5-steven.price@arm.com
      Signed-off-by: Steven Price <steven.price@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Alexandre Ghiti <alex@ghiti.fr>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: "Liang, Kan" <kan.liang@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Zong Li <zong.li@sifive.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8aa82df3
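
      On arm64 the wiring is a pair of one-line aliases, since a leaf at these
      levels is exactly a section (block) mapping:

      #define pmd_leaf(pmd)	pmd_sect(pmd)
      #define pud_leaf(pud)	pud_sect(pud)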
  32. 07 Jan 2020, 1 commit
    • arm64: Revert support for execute-only user mappings · 24cecc37
      Committed by Catalin Marinas
      The ARMv8 64-bit architecture supports execute-only user permissions by
      clearing the PTE_USER and PTE_UXN bits, practically making it a mostly
      privileged mapping from which user code running at EL0 can still execute.
      
      The downside, however, is that the kernel at EL1 inadvertently reading
      such a mapping would not trip over the PAN (privileged access never)
      protection.
      
      Revert the relevant bits from commit cab15ce6 ("arm64: Introduce
      execute-only page access permissions") so that PROT_EXEC implies
      PROT_READ (and therefore PTE_USER) until the architecture gains proper
      support for execute-only user mappings.
      
      Fixes: cab15ce6 ("arm64: Introduce execute-only page access permissions")
      Cc: <stable@vger.kernel.org> # 4.9.x-
      Acked-by: Will Deacon <will@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      24cecc37
  33. 07 Nov 2019, 1 commit