1. 05 Jun, 2021 1 commit
  2. 11 May, 2021 1 commit
  3. 29 Mar, 2021 2 commits
    • arm64: setup: name `tcr` register · 5cd6fa6d
      By Mark Rutland
      In __cpu_setup we conditionally manipulate the TCR_EL1 value in x10
      after previously using x10 as a scratch register for unrelated temporary
      variables.
      
      To make this a bit clearer, let's move the TCR_EL1 value into a named
      register `tcr`. To simplify the register allocation, this is placed in
      the highest available caller-saved scratch register, x16 (`mair` having
      taken x17 in the preceding patch).
      
      Following the example of `mair`, we initialise the register with the
      default value prior to any feature discovery, and write it to TCR_EL1
      after all feature discovery is complete, which allows us to simplify
      the feature discovery code (a C sketch of this pattern follows the
      `mair` entry below).
      
      The existing `mte_tcr` register is no longer needed, and is replaced by
      the use of x10 as a temporary, matching the rest of the MTE feature
      discovery assembly in __cpu_setup. As x20 is no longer used, the
      function is now AAPCS compliant, as we've generally aimed for in our
      assembly functions.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20210326180137.43119-3-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      5cd6fa6d
    • arm64: setup: name `mair` register · 776e49af
      By Mark Rutland
      In __cpu_setup we conditionally manipulate the MAIR_EL1 value in x5
      before later reusing x5 as a scratch register for unrelated temporary
      variables.
      
      To make this a bit clearer, let's move the MAIR_EL1 value into a named
      register `mair`. To simplify the register allocation, this is placed in
      the highest available caller-saved scratch register, x17. As it is no
      longer clobbered by other usage, we can write the value to MAIR_EL1 at
      the end of the function as we do for TCR_EL1 rather than part-way through
      feature discovery.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20210326180137.43119-2-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      776e49af
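      Both commits above apply the same pattern: seed a dedicated register
      with a default value, let feature discovery adjust it in place, and
      write the system register exactly once at the end. A minimal C sketch
      of that pattern, assuming an illustrative default value and feature
      flag (this is not the kernel's actual assembly, and the write only
      works at EL1):

      	#include <stdint.h>

      	#define TCR_DEFAULT	0x00000012B5193519ULL	/* illustrative only */
      	#define TCR_HD		(1ULL << 40)		/* TCR_EL1.HD: hardware dirty-bit management */

      	static inline void write_tcr_el1(uint64_t val)
      	{
      		asm volatile("msr tcr_el1, %0" : : "r"(val));	/* privileged, EL1 only */
      		asm volatile("isb");
      	}

      	void cpu_setup_sketch(int cpu_has_hw_dbm)
      	{
      		uint64_t tcr = TCR_DEFAULT;	/* 1. initialise with the default value */

      		if (cpu_has_hw_dbm)		/* 2. feature discovery adjusts it in place */
      			tcr |= TCR_HD;

      		write_tcr_el1(tcr);		/* 3. single write once discovery is complete */
      	}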
  4. 08 Feb, 2021 2 commits
  5. 06 Jan, 2021 1 commit
  6. 23 Dec, 2020 1 commit
  7. 03 Dec, 2020 1 commit
  8. 26 Nov, 2020 1 commit
  9. 11 Nov, 2020 1 commit
    • arm64: consistently use reserved_pg_dir · 833be850
      By Mark Rutland
      Depending on configuration options and specific code paths, we either
      use the empty_zero_page or the configuration-dependent reserved_ttbr0
      as a reserved value for TTBR{0,1}_EL1.
      
      To simplify this code, let's always allocate and use the same
      reserved_pg_dir, replacing reserved_ttbr0. Note that this is allocated
      in BSS (and hence pre-zeroed), and is also marked as read-only in the kernel
      Image mapping.
      
      Keeping this separate from the empty_zero_page potentially helps with
      robustness as the empty_zero_page is used in a number of cases where a
      failure to map it read-only could allow it to become corrupted.
      
      The (presently unused) swapper_pg_end symbol is also removed, and
      comments are added wherever we rely on the offsets between the
      pre-allocated pg_dirs to keep these cases easily identifiable.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201103102229.8542-1-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      833be850
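      A hedged C sketch of the idea: whenever userspace translation must be
      shut off, point TTBR0_EL1 at the single pre-zeroed, read-only
      reserved_pg_dir so every EL0 walk hits an invalid entry (the helper
      name and the physical-address plumbing are illustrative; the kernel
      does this with its own accessors and barriers):

      	#include <stdint.h>

      	static inline void set_reserved_ttbr0_sketch(uint64_t reserved_pg_dir_phys)
      	{
      		/* every entry in reserved_pg_dir is zero, i.e. invalid, so any
      		 * EL0 translation from here on faults rather than walking
      		 * stale user tables */
      		asm volatile("msr ttbr0_el1, %0" : : "r"(reserved_pg_dir_phys));
      		asm volatile("isb");
      	}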
  10. 04 Sep, 2020 2 commits
  11. 10 Jun, 2020 2 commits
    • mm: reorder includes after introduction of linux/pgtable.h · 65fddcfc
      By Mike Rapoport
      The replacement of <asm/pgtable.h> with <linux/pgtable.h> left the
      include of the latter in the middle of the asm includes.  Fix this up
      with the aid of the below script and manual adjustments here and there.
      
      	import sys
      	import re
      
      	if len(sys.argv) != 3:
      	    print "USAGE: %s <file> <header>" % (sys.argv[0])
      	    sys.exit(1)
      
      	hdr_to_move="#include <linux/%s>" % sys.argv[2]
      	moved = False
      	in_hdrs = False
      
      	with open(sys.argv[1], "r") as f:
      	    lines = f.readlines()
      	    for _line in lines:
      		line = _line.rstrip('\n')
      		if line == hdr_to_move:
      		    continue
      		if line.startswith("#include <linux/"):
      		    in_hdrs = True
      		elif not moved and in_hdrs:
      		    moved = True
      		    print hdr_to_move
      		print line
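      (The script is Python 2. Assuming it were saved as, say,
      reorder_includes.py — the name is hypothetical — each file would be
      processed along the lines of `python2 reorder_includes.py mm/memory.c
      pgtable.h`, with the rewritten file printed on stdout.)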
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vincent Chen <deanbo422@gmail.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: http://lkml.kernel.org/r/20200514170327.31389-4-rppt@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      65fddcfc
    • mm: introduce include/linux/pgtable.h · ca5999fd
      By Mike Rapoport
      The include/linux/pgtable.h is going to be the home of generic page table
      manipulation functions.
      
      Start with moving asm-generic/pgtable.h to include/linux/pgtable.h and
      make the latter include asm/pgtable.h.
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vincent Chen <deanbo422@gmail.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: http://lkml.kernel.org/r/20200514170327.31389-3-rppt@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ca5999fd
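      Sketched, the resulting header looks like this (the real file carries
      the full set of generic helpers; this only shows the inclusion order
      the commit describes):

      	/* include/linux/pgtable.h -- structural sketch */
      	#ifndef _LINUX_PGTABLE_H
      	#define _LINUX_PGTABLE_H

      	#include <asm/pgtable.h>	/* per-architecture definitions come first */

      	/* ... generic page table helpers, moved here from asm-generic/pgtable.h ... */

      	#endif /* _LINUX_PGTABLE_H */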
  12. 15 May, 2020 1 commit
  13. 28 Apr, 2020 2 commits
    • arm64: simplify ptrauth initialization · 62a679cb
      By Mark Rutland
      Currently __cpu_setup conditionally initializes the address
      authentication keys and enables them in SCTLR_EL1, doing so differently
      for the primary CPU and secondary CPUs, and skipping this work for CPUs
      returning from an idle state. For the latter case, cpu_do_resume
      restores the keys and SCTLR_EL1 value after the MMU has been enabled.
      
      This flow is rather difficult to follow, so instead let's move the
      primary and secondary CPU initialization into their respective boot
      paths. By following the example of cpu_do_resume and doing so once the
      MMU is enabled, we can always initialize the keys from the values in
      thread_struct, and avoid the machinery necessary to pass the keys in
      secondary_data or open-coding initialization for the boot CPU.
      
      This means we perform an additional RMW of SCTLR_EL1, but we already do
      this in the cpu_do_resume path, and for other features in cpufeature.c,
      so this isn't a major concern in a bringup path. Note that even while
      the enable bits are clear, the key registers are accessible.
      
      As this now renders the argument to __cpu_setup redundant, let's also
      remove that entirely. Future extensions can follow a similar approach to
      initialize values that differ for primary/secondary CPUs.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20200423101606.37601-3-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
      62a679cb
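      A hedged C sketch of the SCTLR_EL1 read-modify-write the commit
      mentions, run once per CPU after the MMU is up (the bit position is
      architectural; loading the keys from thread_struct is omitted here):

      	#include <stdint.h>

      	#define SCTLR_ELx_ENIA	(1ULL << 31)	/* enable address auth with the A instruction key */

      	static inline void ptrauth_enable_sketch(void)
      	{
      		uint64_t sctlr;

      		asm volatile("mrs %0, sctlr_el1" : "=r"(sctlr));
      		sctlr |= SCTLR_ELx_ENIA;	/* key registers are accessible even while clear */
      		asm volatile("msr sctlr_el1, %0" : : "r"(sctlr));
      		asm volatile("isb");
      	}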
    • arm64: remove ptrauth_keys_install_kernel sync arg · d0055da5
      By Mark Rutland
      The 'sync' argument to the ptrauth_keys_install_kernel macro is somewhat
      opaque at call sites, so instead let's have regular and _nosync variants
      of the macro to make this a little more obvious.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20200423101606.37601-2-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
      d0055da5
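      The same readability argument, sketched as C prototypes (illustrative,
      not the kernel's actual interface): a named _nosync variant is
      self-documenting where a bare flag is not.

      	struct task_struct;

      	void keys_install_kernel(struct task_struct *tsk);		/* installs keys, then synchronises */
      	void keys_install_kernel_nosync(struct task_struct *tsk);	/* caller issues the isb later */

      	/* versus the old shape: keys_install_kernel(tsk, 1) -- what does the 1 mean? */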
  14. 24 Mar, 2020 1 commit
  15. 18 Mar, 2020 4 commits
  16. 07 Mar, 2020 1 commit
    • arm64: trap to EL1 accesses to AMU counters from EL0 · 87a1f063
      By Ionela Voinescu
      The activity monitors extension is an optional extension introduced
      by the ARMv8.4 CPU architecture. In order to access the activity
      monitors counters safely, if desired, the kernel should detect the
      presence of the extension through the feature register, and mediate
      the access.
      
      Therefore, disable direct accesses to activity monitors counters
      from EL0 (userspace) and trap them to EL1 (kernel).
      
      Note that the ARM64_AMU_EXTN kernel config has no effect on this
      code. Given that amuserenr_el0 resets to an UNKNOWN value, trapping
      EL0 accesses to EL1 is always attempted for safety and security
      reasons. Therefore firmware
      should still ensure accesses to AMU registers are not trapped in
      EL2/EL3 as this code cannot be bypassed if the CPU implements the
      Activity Monitors Unit.
      Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com>
      Reviewed-by: James Morse <james.morse@arm.com>
      Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Steve Capper <steve.capper@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      87a1f063
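      A hedged sketch of the trap setup: zeroing AMUSERENR_EL0 makes every
      EL0 access to the AMU counter registers trap to EL1 (the register name
      is architectural; assembling this needs ARMv8.4 support):

      	static inline void amu_trap_el0_sketch(void)
      	{
      		/* the register resets to an UNKNOWN value, so always zero it */
      		asm volatile("msr amuserenr_el0, xzr");
      		asm volatile("isb");
      	}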
  17. 27 Feb, 2020 1 commit
    • arm64: mm: convert cpu_do_switch_mm() to C · 25b92693
      By Mark Rutland
      There's no reason that cpu_do_switch_mm() needs to be written as an
      assembly function, and having it as a C function would make it easier to
      maintain.
      
      This patch converts cpu_do_switch_mm() to C, removing code that this
      change makes redundant (e.g. the mmid macro). Since the header comment
      was stale and the prototype now implies all the necessary information,
      this comment is removed. The 'pgd_phys' argument is made a phys_addr_t
      to match the return type of virt_to_phys().
      
      At the same time, post_ttbr_update_workaround() is updated to use
      IS_ENABLED(), which allows the compiler to figure out it can elide calls
      for !CONFIG_CAVIUM_ERRATUM_27456 builds.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will@kernel.org>
      [catalin.marinas@arm.com: change comments from asm-style to C-style]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      25b92693
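      A sketch of the IS_ENABLED() pattern the commit leans on: the condition
      is a compile-time constant, so for !CONFIG_CAVIUM_ERRATUM_27456 builds
      the compiler drops the body (and, at call sites, the call itself). The
      workaround body is elided here:

      	#include <linux/kconfig.h>	/* IS_ENABLED() */

      	void post_ttbr_update_workaround_sketch(void)
      	{
      		if (!IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456))
      			return;	/* constant-folded away when the option is off */

      		/* ... Cavium erratum 27456 workaround ... */
      	}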
  18. 17 Jan, 2020 2 commits
  19. 08 Jan, 2020 1 commit
    • arm64: mm: Use modern annotations for assembly functions · f4659254
      By Mark Brown
      In an effort to clarify and simplify the annotation of assembly functions
      in the kernel, new macros have been introduced. These replace ENTRY and
      ENDPROC and also add a new annotation for static functions, which
      previously had no ENTRY equivalent. Update the annotations in the mm code
      to the new macros. Even the functions called from non-standard
      environments like idmap have no special requirements on their
      environments, so they can be treated like regular functions.
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Signed-off-by: Will Deacon <will@kernel.org>
      f4659254
  20. 28 Aug, 2019 1 commit
    • arm64: kpti: ensure patched kernel text is fetched from PoU · f32c7a8e
      By Mark Rutland
      While the MMU is disabled, I-cache speculation can result in
      instructions being fetched from the PoC. During boot we may patch
      instructions (e.g. for alternatives and jump labels), and these may be
      dirty at the PoU (and stale at the PoC).
      
      Thus, while the MMU is disabled in the KPTI pagetable fixup code we may
      load stale instructions into the I-cache, potentially leading to
      subsequent crashes when executing regions of code which have been
      modified at runtime.
      
      Similarly to commit:
      
        8ec41987 ("arm64: mm: ensure patched kernel text is fetched from PoU")
      
      ... we can invalidate the I-cache after enabling the MMU to prevent such
      issues.
      
      The KPTI pagetable fixup code itself should be clean to the PoC per the
      boot protocol, so no maintenance is required for this code.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: James Morse <james.morse@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      f32c7a8e
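      The fix amounts to an I-cache invalidate once the MMU is back on,
      sketched here as a C helper (the actual change is in the KPTI fixup
      assembly):

      	static inline void icache_inval_all_sketch(void)
      	{
      		asm volatile("ic iallu");	/* invalidate all I-cache lines */
      		asm volatile("dsb nsh");	/* complete the invalidation... */
      		asm volatile("isb");		/* ...and resynchronise the fetch stream */
      	}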
  21. 09 Aug, 2019 3 commits
  22. 19 Jun, 2019 1 commit
  23. 09 Apr, 2019 1 commit
  24. 01 Mar, 2019 1 commit
    • arm64: Add workaround for Fujitsu A64FX erratum 010001 · 3e32131a
      By Zhang Lei
      On Fujitsu A64FX cores (versions 1.0 and 1.1), a memory access may cause
      an undefined fault (Data abort, DFSC=0b111111). This fault occurs under
      a specific hardware condition when a load/store instruction performs an
      address translation. Any load/store instruction except non-fault
      accesses (in both Armv8 and SVE) might cause this undefined fault.
      
      The TCR_ELx.NFD1 bit is used by the kernel when CONFIG_RANDOMIZE_BASE
      is enabled to mitigate timing attacks against KASLR where the kernel
      address space could be probed using the FFR and suppressed faults on
      SVE loads.
      
      Since this erratum causes spurious exceptions, which may corrupt
      the exception registers, we clear the TCR_ELx.NFDx bits when
      booting on an affected CPU.
      Signed-off-by: Zhang Lei <zhang.lei@jp.fujitsu.com>
      [Generated MIDR value/mask for __cpu_setup(), removed spurious-fault handler
       and always disabled the NFDx bits on affected CPUs]
      Signed-off-by: James Morse <james.morse@arm.com>
      Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      3e32131a
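      A C sketch of the mitigation: mask the non-fault-data bits out of the
      TCR value when booting on an affected part (the bit positions are
      architectural: NFD0 is bit 53, NFD1 is bit 54; the detection predicate
      is assumed):

      	#include <stdint.h>

      	#define TCR_NFD0	(1ULL << 53)
      	#define TCR_NFD1	(1ULL << 54)

      	static uint64_t a64fx_fixup_tcr_sketch(uint64_t tcr, int affected_a64fx)
      	{
      		if (affected_a64fx)
      			tcr &= ~(TCR_NFD0 | TCR_NFD1);	/* never enable NFDx on these cores */
      		return tcr;
      	}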
  25. 06 Feb, 2019 1 commit
  26. 29 Dec, 2018 1 commit
  27. 11 Dec, 2018 3 commits
    • arm64: Kconfig: Re-jig CONFIG options for 52-bit VA · 68d23da4
      By Will Deacon
      Enabling 52-bit VAs for userspace is pretty confusing, since it requires
      you to select "48-bit" virtual addressing in the Kconfig.
      
      Rework the logic so that 52-bit user virtual addressing is advertised in
      the "Virtual address space size" choice, along with some help text to
      describe its interaction with Pointer Authentication. The EXPERT-only
      option to force all user mappings to the 52-bit range is then made
      available immediately below the VA size selection.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      68d23da4
    • arm64: mm: introduce 52-bit userspace support · 67e7fdfc
      By Steve Capper
      On arm64 there is optional support for a 52-bit virtual address space.
      To use it, the kernel must run with a 64KB page size on hardware that
      supports the feature.
      
      For an arm64 kernel supporting a 48 bit VA with a 64KB page size,
      some changes are needed to support a 52-bit userspace:
       * TCR_EL1.T0SZ needs to be 12 instead of 16,
       * TASK_SIZE needs to reflect the new size.
      
      This patch implements the above when the support for 52-bit VAs is
      detected at early boot time.
      
      On arm64, userspace address translation is controlled by TTBR0_EL1. As
      well as userspace, TTBR0_EL1 controls:
       * The identity mapping,
       * EFI runtime code.
      
      It is possible to run a kernel with an identity mapping that has a
      larger VA size than userspace (and for this case __cpu_set_tcr_t0sz()
      would set TCR_EL1.T0SZ as appropriate). However, when the conditions for
      52-bit userspace are met, it is possible to keep TCR_EL1.T0SZ fixed at
      12. Thus in this patch, the TCR_EL1.T0SZ size-changing logic is
      disabled.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      67e7fdfc
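      The T0SZ arithmetic in one line: the field encodes how many of the 64
      address bits fall outside the TTBR0 region, hence (trivially, as a
      sketch):

      	/* TCR_EL1.T0SZ = 64 - VA bits: 48-bit VAs give 16, 52-bit VAs give 12 */
      	static inline unsigned int t0sz_sketch(unsigned int va_bits)
      	{
      		return 64 - va_bits;
      	}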
    • arm64: mm: Offset TTBR1 to allow 52-bit PTRS_PER_PGD · e842dfb5
      By Steve Capper
      Enabling 52-bit VAs on arm64 requires that the PGD table expands from 64
      entries (for the 48-bit case) to 1024 entries. This quantity,
      PTRS_PER_PGD is used as follows to compute which PGD entry corresponds
      to a given virtual address, addr:
      
      pgd_index(addr) -> (addr >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1)
      
      Userspace addresses are prefixed by 0's, so for a 48-bit userspace
      address, uva, the following is true:
      (uva >> PGDIR_SHIFT) & (1024 - 1) == (uva >> PGDIR_SHIFT) & (64 - 1)
      
      In other words, a 48-bit userspace address will have the same pgd_index
      when using PTRS_PER_PGD = 64 and 1024.
      
      Kernel addresses are prefixed by 1's so, given a 48-bit kernel address,
      kva, we have the following inequality:
      (kva >> PGDIR_SHIFT) & (1024 - 1) != (kva >> PGDIR_SHIFT) & (64 - 1)
      
      In other words, a 48-bit kernel virtual address will have a different
      pgd_index when using PTRS_PER_PGD = 64 and 1024.
      
      If, however, we note that:
      kva = 0xFFFF << 48 + lower (where lower[63:48] == 0b)
      and, PGDIR_SHIFT = 42 (as we are dealing with 64KB PAGE_SIZE)
      
      We can consider:
      (kva >> PGDIR_SHIFT) & (1024 - 1) - (kva >> PGDIR_SHIFT) & (64 - 1)
       = (0xFFFF << 6) & 0x3FF - (0xFFFF << 6) & 0x3F	// "lower" cancels out
       = 0x3C0
      
      In other words, one can switch PTRS_PER_PGD to the 52-bit value globally
      provided that they increment ttbr1_el1 by 0x3C0 * 8 = 0x1E00 bytes when
      running with 48-bit kernel VAs (TCR_EL1.T1SZ = 16).
      
      For kernel configuration where 52-bit userspace VAs are possible, this
      patch offsets ttbr1_el1 and sets PTRS_PER_PGD corresponding to the
      52-bit value.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      [will: added comment to TTBR1_BADDR_4852_OFFSET calculation]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      e842dfb5
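      The derivation above is easy to machine-check; a self-contained C
      program (the kernel VA is arbitrary, PGDIR_SHIFT = 42 as in the
      commit):

      	#include <stdio.h>
      	#include <stdint.h>

      	#define PGDIR_SHIFT	42

      	static uint64_t pgd_index(uint64_t addr, uint64_t ptrs_per_pgd)
      	{
      		return (addr >> PGDIR_SHIFT) & (ptrs_per_pgd - 1);
      	}

      	int main(void)
      	{
      		uint64_t kva = 0xFFFF000012345678ULL;	/* any 48-bit kernel VA */
      		uint64_t diff = pgd_index(kva, 1024) - pgd_index(kva, 64);

      		/* prints: index diff 0x3c0, ttbr1 byte offset 0x1e00 */
      		printf("index diff 0x%llx, ttbr1 byte offset 0x%llx\n",
      		       (unsigned long long)diff,
      		       (unsigned long long)(diff * 8));
      		return 0;
      	}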