1. 18 Mar 2020, 2 commits
    • arm64: mask PAC bits of __builtin_return_address · 689eae42
      Amit Daniel Kachhap authored
      Functions like vmap() record how much memory has been allocated by their
      callers, and callers are identified using __builtin_return_address(). Once
      the kernel is using pointer-auth the return address will be signed. This
      means it will not match any kernel symbol, and will vary between threads
      even for the same caller.
      
      The output of /proc/vmallocinfo in this case may look like,
      0x(____ptrval____)-0x(____ptrval____)   20480 0x86e28000100e7c60 pages=4 vmalloc N0=4
      0x(____ptrval____)-0x(____ptrval____)   20480 0x86e28000100e7c60 pages=4 vmalloc N0=4
      0x(____ptrval____)-0x(____ptrval____)   20480 0xc5c78000100e7c60 pages=4 vmalloc N0=4
      
      The above three 64-bit values should all be the same symbol name, not
      three different signed LR values.
      
      Use the pre-processor to add logic that clears the PAC bits from the
      value returned by __builtin_return_address(). This patch adds a new
      file, asm/compiler.h, which is transitively included via
      include/compiler_types.h on the compiler command line, so it is
      guaranteed to be loaded and users of this macro will not pick up a
      wrong version.
      
      Helper macros ptrauth_kernel_pac_mask/ptrauth_clear_pac are created
      for this purpose and added in this file. The existing macro
      ptrauth_user_pac_mask is moved there from asm/pointer_auth.h.
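      
      A minimal sketch of what such masking macros might look like (assuming
      arm64's convention that bit 55 selects between user and kernel VAs and
      that vabits_actual holds the runtime VA width; illustrative, not the
      exact upstream hunk):
      
          /* Assumed convention: bit 55 distinguishes TTBR1 (kernel) from
           * TTBR0 (user) pointers; PAC bits live above vabits_actual. */
          #define ptrauth_kernel_pac_mask()   GENMASK_ULL(63, vabits_actual)
          
          #define ptrauth_clear_pac(ptr)                                    \
                  ((ptr & BIT_ULL(55)) ? (ptr | ptrauth_kernel_pac_mask())  \
                                       : (ptr & ~ptrauth_user_pac_mask()))
          
          /* Wrap the builtin so callers transparently get an unsigned,
           * symbol-resolvable address back. */
          #define __builtin_return_address(val)                             \
                  (void *)(ptrauth_clear_pac((u64)__builtin_return_address(val)))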
      Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Reviewed-by: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: enable ptrauth earlier · 6982934e
      Kristina Martsenko authored
      When the kernel is compiled with pointer auth instructions, the boot CPU
      needs to start using address auth very early, so change the cpucap to
      account for this.
      
      Pointer auth must be enabled before we call C functions, because it is
      not possible to enter a function with pointer auth disabled and exit it
      with pointer auth enabled. Note, mismatches between architected and
      IMPDEF algorithms will still be caught by the cpufeature framework (the
      separate *_ARCH and *_IMP_DEF cpucaps).
      
      Note the change in behavior: if the boot CPU has address auth and a
      late CPU does not, then the late CPU is parked by the cpufeature
      framework. This is acceptable because the kernel will only contain
      NOP-space instructions for PAC, so such a mismatched late CPU would
      silently ignore those instructions in C functions. Also, if the boot
      CPU does not have address auth and a late CPU does, the late CPU will
      still boot, but with the ptrauth feature disabled.
      
      Leave generic authentication as a "system scope" cpucap for now, since
      initially the kernel will only use address authentication.
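      
      A sketch of the kind of cpucap entry this implies (field names follow
      arch/arm64/kernel/cpufeature.c conventions; treat the exact values as
      illustrative rather than the literal hunk):
      
          {
                  /* Boot-CPU scope: decided before the first C call runs */
                  .desc = "Address authentication (architected algorithm)",
                  .capability = ARM64_HAS_ADDRESS_AUTH_ARCH,
                  .type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
                  .sys_reg = SYS_ID_AA64ISAR1_EL1,
                  .sign = FTR_UNSIGNED,
                  .field_pos = ID_AA64ISAR1_APA_SHIFT,
                  .min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
                  .matches = has_cpuid_feature,
          },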
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
      Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
      [Amit: Re-worked ptrauth setup logic, comments]
      Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  2. 04 Feb 2020, 2 commits
  3. 22 Jan 2020, 2 commits
  4. 21 Jan 2020, 1 commit
  5. 16 Jan 2020, 4 commits
  6. 15 Jan 2020, 2 commits
    • arm64: Add initial support for E0PD · 3e6c69a0
      Mark Brown authored
      Kernel Page Table Isolation (KPTI) is used to mitigate some
      speculation-based security issues by ensuring that the kernel is not
      mapped when userspace is running, but this approach is expensive and
      is incompatible with SPE.  E0PD, introduced in the ARMv8.5 extensions,
      provides an alternative which ensures that accesses from userspace to
      the kernel's half of the memory map always fault in constant time,
      preventing timing attacks without requiring constant unmapping and
      remapping or blocking legitimate accesses.
      
      Currently this feature will only be enabled if all CPUs in the system
      support E0PD. If some CPUs do not support the feature at boot time
      then the feature will not be enabled, and in the unlikely event that a
      late CPU is the first CPU to lack the feature, we will reject that CPU.
      
      This initial patch does not yet integrate with KPTI; that will be
      dealt with in follow-up patches.  Ideally we could ensure that by
      default we don't use KPTI on CPUs where E0PD is present.
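      
      A sketch of the per-CPU enable hook such a feature needs (assuming a
      TCR_E0PD1 bit definition for TCR_EL1.E0PD1, as in the arm64 sysreg
      headers; illustrative only):
      
          static void cpu_enable_e0pd(const struct arm64_cpu_capabilities *cap)
          {
                  /* With E0PD1 set, EL0 accesses to the TTBR1 (kernel)
                   * half of the address space fault in constant time. */
                  sysreg_clear_set(tcr_el1, 0, TCR_E0PD1);
                  isb();
          }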
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      [will: Fixed typo in Kconfig text]
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: Move the LSE gas support detection to Kconfig · 395af861
      Catalin Marinas authored
      As the Kconfig syntax gained support for $(as-instr) tests, move the LSE
      gas support detection from Makefile to the main arm64 Kconfig and remove
      the additional CONFIG_AS_LSE definition and check.
      
      Cc: Will Deacon <will@kernel.org>
      Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
  7. 09 Jan 2020, 1 commit
  8. 07 Jan 2020, 1 commit
  9. 13 Dec 2019, 1 commit
  10. 12 Dec 2019, 1 commit
  11. 08 Dec 2019, 1 commit
  12. 25 Nov 2019, 1 commit
  13. 17 Nov 2019, 1 commit
    • int128: move __uint128_t compiler test to Kconfig · c12d3362
      Ard Biesheuvel authored
      In order to use 128-bit integer arithmetic in C code, the architecture
      needs to have declared support for it by setting ARCH_SUPPORTS_INT128,
      and it requires a version of the toolchain that supports this at build
      time. This is why all existing tests for ARCH_SUPPORTS_INT128 also test
      whether __SIZEOF_INT128__ is defined, since this is only the case for
      compilers that can support 128-bit integers.
      
      Let's fold this additional test into the Kconfig declaration of
      ARCH_SUPPORTS_INT128 so that we can also use the symbol in Makefiles,
      e.g., to decide whether a certain object needs to be included in the
      first place.
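      
      A short sketch of the guard this consolidates (the helper name is
      made up for illustration; __SIZEOF_INT128__ is a standard gcc/clang
      predefine on targets with 128-bit integer support):
      
          #include <stdint.h>
          
          /* High 64 bits of a 64x64-bit multiply, using __uint128_t when
           * the compiler advertises support for it. */
          static inline uint64_t mul_hi64(uint64_t a, uint64_t b)
          {
          #ifdef __SIZEOF_INT128__
                  return (uint64_t)(((__uint128_t)a * b) >> 64);
          #else
                  return 0; /* a portable fallback would go here */
          #endif
          }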
      
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  14. 14 Nov 2019, 1 commit
  15. 12 Nov 2019, 1 commit
  16. 11 Nov 2019, 1 commit
    • dma-direct: provide mmap and get_sgtable method overrides · 34dc0ea6
      Christoph Hellwig authored
      For dma-direct we know that the DMA address is an encoding of the
      physical address that we can trivially decode.  Use that fact to
      provide implementations that do not need the arch_dma_coherent_to_pfn
      architecture hook.  Note that we can still only support mmap of
      non-coherent memory if the architecture provides a way to set an
      uncached bit in the page tables.
      that use the generic remap helpers, but other architectures can also
      manually select it.
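      
      A sketch of the core idea (simplified, with error handling elided;
      the function name is illustrative, but dma_to_phys(), PHYS_PFN() and
      remap_pfn_range() are real kernel helpers):
      
          #include <linux/dma-direct.h>
          #include <linux/mm.h>
          
          /* For dma-direct, the DMA address decodes straight to a PFN, so
           * no arch_dma_coherent_to_pfn hook is needed to back mmap. */
          static int dma_direct_mmap_sketch(struct device *dev,
                                            struct vm_area_struct *vma,
                                            dma_addr_t dma_addr)
          {
                  unsigned long pfn = PHYS_PFN(dma_to_phys(dev, dma_addr));
          
                  return remap_pfn_range(vma, vma->vm_start,
                                         pfn + vma->vm_pgoff,
                                         vma->vm_end - vma->vm_start,
                                         vma->vm_page_prot);
          }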
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Max Filippov <jcmvbkbc@gmail.com>
  17. 06 Nov 2019, 1 commit
    • arm64: implement ftrace with regs · 3b23e499
      Torsten Duwe authored
      This patch implements FTRACE_WITH_REGS for arm64, which allows a traced
      function's arguments (and some other registers) to be captured into a
      struct pt_regs, allowing these to be inspected and/or modified. This is
      a building block for live-patching, where a function's arguments may be
      forwarded to another function. This is also necessary to enable ftrace
      and in-kernel pointer authentication at the same time, as it allows the
      LR value to be captured and adjusted prior to signing.
      
      Using GCC's -fpatchable-function-entry=N option, we can have the
      compiler insert a configurable number of NOPs between the function entry
      point and the usual prologue. This also ensures functions are AAPCS
      compliant (e.g. by disabling inter-procedural register allocation).
      
      For example, with -fpatchable-function-entry=2, GCC 8.1.0 compiles the
      following:
      
      | unsigned long bar(void);
      |
      | unsigned long foo(void)
      | {
      |         return bar() + 1;
      | }
      
      ... to:
      
      | <foo>:
      |         nop
      |         nop
      |         stp     x29, x30, [sp, #-16]!
      |         mov     x29, sp
      |         bl      0 <bar>
      |         add     x0, x0, #0x1
      |         ldp     x29, x30, [sp], #16
      |         ret
      
      This patch builds the kernel with -fpatchable-function-entry=2,
      prefixing each function with two NOPs. To trace a function, we replace
      these NOPs with a sequence that saves the LR into a GPR, then calls an
      ftrace entry assembly function which saves this and other relevant
      registers:
      
      | mov	x9, x30
      | bl	<ftrace-entry>
      
      Since patchable functions are AAPCS compliant (and the kernel does not
      use x18 as a platform register), x9-x18 can be safely clobbered in the
      patched sequence and the ftrace entry code.
      
      There are now two ftrace entry functions, ftrace_regs_entry (which saves
      all GPRs), and ftrace_entry (which saves the bare minimum). A PLT is
      allocated for each within modules.
      Signed-off-by: Torsten Duwe <duwe@suse.de>
      [Mark: rework asm, comments, PLTs, initialization, commit message]
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Julien Thierry <jthierry@redhat.com>
      Cc: Will Deacon <will@kernel.org>
  18. 26 Oct 2019, 2 commits
  19. 14 Oct 2019, 1 commit
  20. 08 Oct 2019, 1 commit
  21. 07 Oct 2019, 2 commits
    • arm64: Kconfig: Make CONFIG_COMPAT_VDSO a proper Kconfig option · 7c4791c9
      Will Deacon authored
      CONFIG_COMPAT_VDSO is defined by passing '-DCONFIG_COMPAT_VDSO' to the
      compiler when the generic compat vDSO code is in use. It's much cleaner
      and simpler to expose this as a proper Kconfig option (like x86 does),
      so do that and remove the bodge.
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: vdso32: Fix broken compat vDSO build warnings · e0de01aa
      Vincenzo Frascino authored
      The .config file and the generated include/config/auto.conf can
      end up out of sync after a set of commands since
      CONFIG_CROSS_COMPILE_COMPAT_VDSO is not updated correctly.
      
      The sequence can be reproduced as follows:
      
      $ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig
      [...]
      $ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- menuconfig
      [set CONFIG_CROSS_COMPILE_COMPAT_VDSO="arm-linux-gnueabihf-"]
      $ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu-
      
      Which results in:
      
      arch/arm64/Makefile:62: CROSS_COMPILE_COMPAT not defined or empty,
      the compat vDSO will not be built
      
      even though the compat vDSO has been built:
      
      $ file arch/arm64/kernel/vdso32/vdso.so
      arch/arm64/kernel/vdso32/vdso.so: ELF 32-bit LSB pie executable, ARM,
      EABI5 version 1 (SYSV), dynamically linked,
      BuildID[sha1]=c67f6c786f2d2d6f86c71f708595594aa25247f6, stripped
      
      A similar case, involving changing the configuration parameter
      multiple times, can be traced back to the same family of problems.
      
      Remove the use of CONFIG_CROSS_COMPILE_COMPAT_VDSO altogether and
      instead rely on the cross-compiler prefix coming from the environment
      via CROSS_COMPILE_COMPAT, much like we do for the rest of the kernel.
      
      Cc: Will Deacon <will@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: Will Deacon <will@kernel.org>
      Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
  22. 25 Sep 2019, 2 commits
  23. 18 Sep 2019, 1 commit
  24. 30 Aug 2019, 1 commit
    • arm64: lse: Make ARM64_LSE_ATOMICS depend on JUMP_LABEL · b32baf91
      Will Deacon authored
      Support for LSE atomic instructions (CONFIG_ARM64_LSE_ATOMICS) relies on
      a static key to select between the legacy LL/SC implementation which is
      available on all arm64 CPUs and the super-duper LSE implementation which
      is available on CPUs implementing v8.1 and later.
      
      Unfortunately, when building a kernel with CONFIG_JUMP_LABEL disabled
      (e.g. because the toolchain doesn't support 'asm goto'), the static key
      inside the atomics code tries to use atomics itself. This results in a
      mess of circular includes and a build failure:
      
      In file included from ./arch/arm64/include/asm/lse.h:11,
                       from ./arch/arm64/include/asm/atomic.h:16,
                       from ./include/linux/atomic.h:7,
                       from ./include/asm-generic/bitops/atomic.h:5,
                       from ./arch/arm64/include/asm/bitops.h:26,
                       from ./include/linux/bitops.h:19,
                       from ./include/linux/kernel.h:12,
                       from ./include/asm-generic/bug.h:18,
                       from ./arch/arm64/include/asm/bug.h:26,
                       from ./include/linux/bug.h:5,
                       from ./include/linux/page-flags.h:10,
                       from kernel/bounds.c:10:
      ./include/linux/jump_label.h: In function ‘static_key_count’:
      ./include/linux/jump_label.h:254:9: error: implicit declaration of function ‘atomic_read’ [-Werror=implicit-function-declaration]
        return atomic_read(&key->enabled);
               ^~~~~~~~~~~
      
      [ ... more of the same ... ]
      
      Since LSE atomic instructions are not critical to the operation of the
      kernel, make them depend on JUMP_LABEL at compile time.
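      
      A sketch of the pattern that creates the dependency (the key name is
      illustrative; the real code lives in arch/arm64/include/asm/lse.h):
      
          #include <linux/jump_label.h>
          
          DECLARE_STATIC_KEY_FALSE(arm64_lse_ready);
          
          /* The LSE/LL-SC choice sits behind a static key; without
           * CONFIG_JUMP_LABEL, static keys fall back to atomic_read(),
           * which is exactly what this header is trying to define. */
          static inline bool system_uses_lse_atomics(void)
          {
                  return static_branch_likely(&arm64_lse_ready);
          }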
      Reviewed-by: Andrew Murray <andrew.murray@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
  25. 29 Aug 2019, 1 commit
    • dma-mapping: remove arch_dma_mmap_pgprot · 419e2f18
      Christoph Hellwig authored
      arch_dma_mmap_pgprot is used for two things:
      
       1) to override the "normal" uncached page attributes for mapping
          memory coherent to devices that can't snoop the CPU caches
       2) to provide the special DMA_ATTR_WRITE_COMBINE semantics on older
          arm systems and some mips platforms
      
      Replace the first with the pgprot_dmacoherent macro that is already
      provided by arm and much simpler to use, and lift the
      DMA_ATTR_WRITE_COMBINE handling to common code with an explicit arch
      opt-in.
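      
      For reference, a sketch of an arm64-style pgprot_dmacoherent that
      remaps the attributes to Normal Non-Cacheable (macro names follow the
      arm64 pgtable headers; treat as illustrative):
      
          #define pgprot_dmacoherent(prot)                                  \
                  __pgprot_modify(prot, PTE_ATTRINDX_MASK,                  \
                                  PTE_ATTRINDX(MT_NORMAL_NC) | PTE_PXN | PTE_UXN)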
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>	# m68k
      Acked-by: Paul Burton <paul.burton@mips.com>		# mips
  26. 22 Aug 2019, 1 commit
  27. 20 Aug 2019, 1 commit
  28. 09 Aug 2019, 2 commits
    • arm64: mm: Introduce 52-bit Kernel VAs · b6d00d47
      Steve Capper authored
      Most of the machinery is now in place to enable 52-bit kernel VAs that
      are detectable at boot time.
      
      This patch adds a Kconfig option for 52-bit user and kernel addresses,
      plumbs in the requisite CONFIG_ macros, and sets TCR.T1SZ,
      physvirt_offset and vmemmap at early boot.
      
      To simplify things, this patch also removes the 52-bit-user/48-bit-kernel
      Kconfig option.
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: kasan: Switch to using KASAN_SHADOW_OFFSET · 6bd1d0be
      Steve Capper authored
      KASAN_SHADOW_OFFSET is a constant that is supplied to gcc as a command
      line argument and affects the codegen of the inline address sanitiser.
      
      Essentially, for an example memory access:
          *ptr1 = val;
      The compiler will insert logic similar to the below:
          shadowValue = *((ptr1 >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET)
          if (somethingWrong(shadowValue))
              flagAnError();
      
      This code sequence is inserted in many places, so KASAN_SHADOW_OFFSET
      is essentially baked into the kernel text throughout.
      
      If we want to run a single kernel binary with multiple address spaces,
      then we need to do this with KASAN_SHADOW_OFFSET fixed.
      
      Thankfully, due to the way KASAN_SHADOW_OFFSET is used to provide
      shadow addresses, we know that the end of the shadow region is
      constant w.r.t. VA space size:
          KASAN_SHADOW_END = (~0 >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET
      
      This means that if we increase the size of the VA space, the start of
      the KASAN region expands into lower addresses whilst the end of the
      KASAN region is fixed.
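      
      A worked sketch of that invariant (the offset value is illustrative):
      
          #define KASAN_SHADOW_SCALE_SHIFT  3
          #define KASAN_SHADOW_OFFSET       0xdfffa00000000000UL /* example */
          
          /* Independent of VA size: always (~0UL >> 3) + offset. */
          #define KASAN_SHADOW_END \
                  ((~0UL >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET)
          
          /* Only the start moves as the VA space grows:
           *   start = KASAN_SHADOW_END -
           *           (1UL << (vabits - KASAN_SHADOW_SCALE_SHIFT))
           */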
      
      Currently the arm64 code computes KASAN_SHADOW_OFFSET at build time via
      build scripts with the VA size used as a parameter. (There are build
      time checks in the C code too to ensure that expected values are being
      derived.) It is sufficient, and indeed a simplification, to remove the
      build scripts (and build-time checks) entirely and instead provide the
      KASAN_SHADOW_OFFSET values directly.
      
      This patch removes the logic to compute the KASAN_SHADOW_OFFSET in the
      arm64 Makefile, and instead we adopt the approach used by x86 to supply
      offset values in Kconfig. To help debug/develop future VA space changes,
      the Makefile logic has been preserved in a script file in the arm64
      Documentation folder.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
  29. 07 Aug 2019, 1 commit