1. 10 Jun 2020, 1 commit
  2. 15 May 2020, 1 commit
  3. 12 May 2020, 1 commit
  4. 08 May 2020, 1 commit
  5. 05 May 2020, 1 commit
  6. 04 May 2020, 2 commits
  7. 01 Apr 2020, 1 commit
    • arm64: Kconfig: ptrauth: Add binutils version check to fix mismatch · 15cd0e67
      Amit Daniel Kachhap authored
      The recent addition of ARM64_PTR_AUTH exposed a mismatch issue with
      binutils: gcc 9.1+ inserts a .note.gnu.property section, but this can
      only be handled properly by binutils newer than 2.33.1. If older
      binutils are used, the following warnings are generated:
      
      aarch64-linux-ld: warning: arch/arm64/kernel/vdso/vgettimeofday.o: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0000000
      aarch64-linux-objdump: warning: arch/arm64/lib/csum.o: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0000000
      aarch64-linux-nm: warning: .tmp_vmlinux1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0000000
      
      This patch enables ARM64_PTR_AUTH only when the gcc and binutils
      versions are compatible with each other. Older gcc versions, which do
      not insert such a section, continue to work as before.
      
      This scenario may not occur with clang, as the recent commit 3b446c7d
      ("arm64: Kconfig: verify binutils support for ARM64_PTR_AUTH") already
      rejects binutils versions older than 2.34. (A minimal compile-time
      probe for PAC codegen follows this entry.)
      Reported-by: kbuild test robot <lkp@intel.com>
      Suggested-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
      Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      [catalin.marinas@arm.com: slight adjustment to the comment]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
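      
      A quick way to see whether a given toolchain is affected: when
      return-address signing is active, recent gcc defines the ACLE macro
      __ARM_FEATURE_PAC_DEFAULT, and those are the builds in which gcc 9.1+
      also emits the .note.gnu.property section that old binutils warn
      about. A minimal, hedged probe (standalone C, not part of the patch):
      
      #include <stdio.h>
      
      /*
       * Hedged probe, not kernel code: compile with a recent aarch64 gcc
       * and -mbranch-protection=pac-ret to see the macro appear.
       */
      int main(void)
      {
      #ifdef __ARM_FEATURE_PAC_DEFAULT
              /* bit 0: A key, bit 1: B key, bit 2: leaf functions signed */
              printf("PAC codegen on, key mask 0x%x\n",
                     __ARM_FEATURE_PAC_DEFAULT);
      #else
              printf("PAC codegen off\n");
      #endif
              return 0;
      }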
  8. 20 Mar 2020, 1 commit
  9. 18 Mar 2020, 4 commits
    • arm64: elf: Fix allnoconfig kernel build with !ARCH_USE_GNU_PROPERTY · bf7f15c5
      Will Deacon authored
      Commit ab7876a9 ("arm64: elf: Enable BTI at exec based on ELF
      program properties") introduced the conditional selection of
      ARCH_USE_GNU_PROPERTY if BINFMT_ELF is enabled. With allnoconfig, this
      option is no longer selected and the arm64 arch_parse_elf_property()
      function clashes with the generic dummy implementation (the pattern is
      sketched after this entry).
      
      Link: http://lkml.kernel.org/r/20200318082830.GA31312@willie-the-truck
      Fixes: ab7876a9 ("arm64: elf: Enable BTI at exec based on ELF program properties")
      Signed-off-by: Will Deacon <will@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
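      
      For context, the clash comes from the usual pattern for such arch
      hooks: the generic header provides a static-inline stub whenever
      ARCH_USE_GNU_PROPERTY is off, so an arch must supply the real function
      only when it selects the option. A simplified, hedged sketch of the
      shape (types reduced so it stands alone; not the kernel header):
      
      #include <stdbool.h>
      #include <stddef.h>
      #include <stdint.h>
      
      struct arch_elf_state;
      
      #ifdef CONFIG_ARCH_USE_GNU_PROPERTY
      /* the arch provides the real implementation */
      int arch_parse_elf_property(uint32_t type, const void *data,
                                  size_t datasz, bool compat,
                                  struct arch_elf_state *arch);
      #else
      /* generic stub: also defining the arch version causes the clash */
      static inline int arch_parse_elf_property(uint32_t type,
                                                const void *data,
                                                size_t datasz, bool compat,
                                                struct arch_elf_state *arch)
      {
              return 0;
      }
      #endif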
    • arm64: compile the kernel with ptrauth return address signing · 74afda40
      Kristina Martsenko authored
      Compile all functions with two ptrauth instructions: PACIASP in the
      prologue to sign the return address, and AUTIASP in the epilogue to
      authenticate the return address (from the stack). If authentication
      fails, the return will cause an instruction abort to be taken, followed
      by an oops and the task being killed. (The instruction pair is written
      out by hand in the sketch after this entry.)
      
      This should help protect the kernel against attacks using
      return-oriented programming. As ptrauth protects the return address, it
      can also serve as a replacement for CONFIG_STACKPROTECTOR, although note
      that it does not protect other parts of the stack.
      
      The new instructions are in the HINT encoding space, so on a system
      without ptrauth they execute as NOPs.
      
      CONFIG_ARM64_PTR_AUTH now not only enables ptrauth for userspace and KVM
      guests, but also automatically builds the kernel with ptrauth
      instructions if the compiler supports it. If there is no compiler
      support, we do not warn that the kernel was built without ptrauth
      instructions.
      
      GCC 7 and 8 support the -msign-return-address option, while GCC 9
      deprecates that option and replaces it with -mbranch-protection. Support
      both options.
      
      Clang uses an external assembler, hence this patch makes sure that the
      correct parameter (-march=armv8.3-a) is passed down to help it
      recognize the ptrauth instructions.
      
      The ftrace function tracer works properly with ptrauth only when the
      patchable-function-entry feature is present; this is ensured by the
      Kconfig dependency.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com> # not co-dev parts
      Co-developed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
      [Amit: Cover leaf function, comments, Ftrace Kconfig]
      Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
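      
      For illustration, the instruction pair can be written out by hand in a
      standalone stub (a hedged sketch, not kernel code; with
      -mbranch-protection=pac-ret the compiler emits the equivalent around
      every non-leaf function). An older assembler may need -march=armv8.3-a
      or the raw "hint #25"/"hint #29" encodings for the mnemonics:
      
      #include <stdio.h>
      
      long add_one(long x);
      
      asm(".global add_one\n"
          "add_one:\n"
          "    paciasp\n"                 /* prologue: sign LR (x30) against SP */
          "    stp x29, x30, [sp, #-16]!\n"
          "    mov x29, sp\n"
          "    add x0, x0, #1\n"          /* the function body */
          "    ldp x29, x30, [sp], #16\n"
          "    autiasp\n"                 /* epilogue: authenticate LR before ret */
          "    ret\n");
      
      int main(void)
      {
              printf("%ld\n", add_one(41));   /* prints 42 */
              return 0;
      }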
    • arm64: mask PAC bits of __builtin_return_address · 689eae42
      Amit Daniel Kachhap authored
      Functions like vmap() record how much memory has been allocated by their
      callers, and callers are identified using __builtin_return_address(). Once
      the kernel is using pointer-auth the return address will be signed. This
      means it will not match any kernel symbol, and will vary between threads
      even for the same caller.
      
      The output of /proc/vmallocinfo in this case may look like:
      0x(____ptrval____)-0x(____ptrval____)   20480 0x86e28000100e7c60 pages=4 vmalloc N0=4
      0x(____ptrval____)-0x(____ptrval____)   20480 0x86e28000100e7c60 pages=4 vmalloc N0=4
      0x(____ptrval____)-0x(____ptrval____)   20480 0xc5c78000100e7c60 pages=4 vmalloc N0=4
      
      The three 64-bit values above should all correspond to the same symbol,
      not appear as different LR values.
      
      Use the preprocessor to add logic that clears the PAC for
      __builtin_return_address() callers. This patch adds a new file,
      asm/compiler.h, which is transitively included (via
      include/compiler_types.h) on the compiler command line, so it is
      guaranteed to be loaded and users of this macro will not pick up a
      wrong version.
      
      Helper macros ptrauth_kernel_pac_mask/ptrauth_clear_pac are created for
      this purpose and added in this file. The existing macro
      ptrauth_user_pac_mask is moved there from asm/pointer_auth.h. (A
      standalone sketch of the masking follows this entry.)
      Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Reviewed-by: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
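      
      A userspace-runnable sketch of the masking logic (hedged: the real
      macros live in arch/arm64/include/asm/compiler.h and use the runtime
      vabits_actual; a 48-bit VA size is assumed here so the example stands
      alone):
      
      #include <stdint.h>
      #include <stdio.h>
      
      #define VA_BITS 48      /* assumption for this sketch */
      #define GENMASK_ULL(h, l) ((~0ULL << (l)) & (~0ULL >> (63 - (h))))
      #define BIT_ULL(n) (1ULL << (n))
      
      /*
       * For kernel pointers the PAC sits in bits 63:56 and 54:VA_BITS;
       * bit 55 selects the kernel/user half, so it is left intact.
       */
      #define ptrauth_kernel_pac_mask() \
              (GENMASK_ULL(63, 56) | GENMASK_ULL(54, VA_BITS))
      #define ptrauth_user_pac_mask() GENMASK_ULL(54, VA_BITS)
      
      /* restore a canonical address: set the PAC bits for kernel pointers,
       * clear them for user pointers */
      #define ptrauth_clear_pac(ptr)                          \
              (((ptr) & BIT_ULL(55)) ?                        \
               ((ptr) | ptrauth_kernel_pac_mask()) :          \
               ((ptr) & ~ptrauth_user_pac_mask()))
      
      int main(void)
      {
              /* two of the distinct signed LRs from the output above */
              uint64_t lr1 = 0x86e28000100e7c60ULL;
              uint64_t lr2 = 0xc5c78000100e7c60ULL;
      
              /* both print 0xffff8000100e7c60: the same caller */
              printf("0x%016llx\n", (unsigned long long)ptrauth_clear_pac(lr1));
              printf("0x%016llx\n", (unsigned long long)ptrauth_clear_pac(lr2));
              return 0;
      }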
    • arm64: enable ptrauth earlier · 6982934e
      Kristina Martsenko authored
      When the kernel is compiled with pointer auth instructions, the boot CPU
      needs to start using address auth very early, so change the cpucap to
      account for this.
      
      Pointer auth must be enabled before we call C functions, because it is
      not possible to enter a function with pointer auth disabled and exit it
      with pointer auth enabled (a register-level sketch of the enable step
      follows this entry). Note that mismatches between architected and
      IMPDEF algorithms will still be caught by the cpufeature framework (the
      separate *_ARCH and *_IMP_DEF cpucaps).
      
      Note the change in behavior: if the boot CPU has address auth and a
      late CPU does not, then the late CPU is parked by the cpufeature
      framework. This is possible because the kernel will only have NOP-space
      instructions for PAC, so such a mismatched late CPU will silently
      ignore those instructions in C functions. Also, if the boot CPU does
      not have address auth and a late CPU does, then the late CPU will still
      boot, but with the ptrauth feature disabled.
      
      Leave generic authentication as a "system scope" cpucap for now, since
      initially the kernel will only use address authentication.
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
      Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
      [Amit: Re-worked ptrauth setup logic, comments]
      Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
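      
      At the register level, "start using address auth" boils down to
      setting enable bits in SCTLR_EL1. A hedged, EL1-only sketch (not the
      kernel's actual code, which does this in early assembly before the
      first C function, with the APIA key already programmed):
      
      #include <stdint.h>
      
      #define SCTLR_ELx_ENIA (1UL << 31)  /* enable the APIA instruction key */
      
      /*
       * EL1-only: once EnIA is set, PACIASP/AUTIASP stop behaving as NOPs
       * and actually sign/authenticate return addresses.
       */
      static inline void enable_address_auth(void)
      {
              uint64_t sctlr;
      
              asm volatile("mrs %0, sctlr_el1" : "=r" (sctlr));
              sctlr |= SCTLR_ELx_ENIA;
              asm volatile("msr sctlr_el1, %0" : : "r" (sctlr));
              asm volatile("isb");    /* synchronize before relying on PAC */
      }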
  10. 17 Mar 2020, 2 commits
  11. 07 Mar 2020, 1 commit
  12. 04 Mar 2020, 1 commit
    • arm64/mm: Enable memory hot remove · bbd6ec60
      Anshuman Khandual authored
      The arch code for hot-remove must tear down portions of the linear map and
      vmemmap corresponding to memory being removed. In both cases the page
      tables mapping these regions must be freed, and when sparse vmemmap is in
      use the memory backing the vmemmap must also be freed.
      
      This patch adds unmap_hotplug_range() and free_empty_tables() helpers
      which can be used to tear down either region, and calls them from
      vmemmap_free() and ___remove_pgd_mapping(). The free_mapped argument
      determines whether the backing memory will be freed.
      
      It makes two distinct passes over the kernel page table. In the first
      pass, unmap_hotplug_range() unmaps each mapped leaf entry, invalidates
      the applicable TLB entries and, if required (vmemmap), frees the
      backing memory. In the second pass, free_empty_tables() looks for empty
      page-table sections whose page-table pages can be unmapped,
      TLB-invalidated and freed.
      
      While freeing intermediate-level page-table pages, bail out if any of
      their entries are still valid. This can happen for a partially filled
      kernel page table, either from a previously failed memory hot-add
      attempt or when removing an address range which does not span the
      entire page-table page.
      
      The vmemmap region may share levels of table with the vmalloc region.
      There can be conflicts between hot remove freeing page-table pages and
      a concurrent vmalloc() walking the kernel page table. This conflict
      cannot be solved simply by taking the init_mm ptl, because of the
      existing locking scheme in vmalloc(). So free_empty_tables() implements
      a floor-and-ceiling method, borrowed from the user page-table teardown
      in free_pgd_range(), which skips freeing a page-table page if the
      intermediate address range is not aligned or if the floor-ceiling
      limits do not cover the entire page-table page (see the sketch after
      this entry).
      
      Boot memory on arm64 cannot be removed. Hence this registers a new
      memory hotplug notifier which prevents boot memory from being offlined
      and removed.
      
      While here, update arch_add_memory() to handle __add_pages() failures
      by unmapping the recently added kernel linear mapping, then enable
      memory hot remove on arm64 platforms by default with
      ARCH_ENABLE_MEMORY_HOTREMOVE.
      
      This implementation is overall inspired by the kernel page-table
      teardown procedure on the x86 architecture and by the user page-table
      teardown method.
      
      [Mike and Catalin added P4D page table level support]
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
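      
      The floor/ceiling rule is the subtle part. A hedged, standalone sketch
      of the test (names and framing are illustrative, following the shape
      of the checks in free_pgd_range() rather than copying the arm64 code):
      
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>
      
      /*
       * May a page-table page whose entries cover `span` bytes around
       * [addr, end) be freed? Only if [floor, ceiling) proves that no
       * neighbouring mapping still uses the same page.
       */
      static bool may_free_table_page(uint64_t addr, uint64_t end,
                                      uint64_t floor, uint64_t ceiling,
                                      uint64_t span)
      {
              uint64_t start = addr & ~(span - 1); /* align down to coverage */
      
              if (start < floor)
                      return false;   /* page also maps below the floor */
              if (ceiling) {
                      ceiling &= ~(span - 1);
                      if (!ceiling)
                              return false;
              }
              if (end - 1 > ceiling - 1)
                      return false;   /* page also maps above the ceiling */
              return true;
      }
      
      int main(void)
      {
              /* removing [0x40200000, 0x40400000): the table page spanning
               * that 2 MiB lies wholly inside [floor, ceiling), so prints 1 */
              printf("%d\n", may_free_table_page(0x40200000ULL, 0x40400000ULL,
                                                 0x40000000ULL, 0x80000000ULL,
                                                 0x200000ULL));
              return 0;
      }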
  13. 27 Feb 2020, 1 commit
  14. 20 Feb 2020, 1 commit
    • arm64: Remove TIF_NOHZ · 320a4fc2
      Frederic Weisbecker authored
      The syscall slow path is spuriously invoked when context tracking is
      activated, even though the entry code calls context tracking from the
      fast path.
      
      Remove that overhead, and the now-unused flag itself while at it.
      Acked-by: Will Deacon <will@kernel.org>
      Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
  15. 18 Feb 2020, 2 commits
  16. 14 Feb 2020, 1 commit
    • context-tracking: Introduce CONFIG_HAVE_TIF_NOHZ · 490f561b
      Frederic Weisbecker authored
      A few archs (x86, arm, arm64) no longer rely on TIF_NOHZ to call into
      context tracking on user entry/exit, but instead use static keys (or
      not) to optimize those calls. Ideally every arch should migrate to that
      behaviour in the long run.
      
      Add a config option to let those archs remove their TIF_NOHZ
      definitions.
      Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Burton <paulburton@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: David S. Miller <davem@davemloft.net>
  17. 04 Feb 2020, 2 commits
  18. 22 Jan 2020, 2 commits
  19. 21 Jan 2020, 1 commit
  20. 16 Jan 2020, 4 commits
  21. 15 Jan 2020, 2 commits
    • arm64: Add initial support for E0PD · 3e6c69a0
      Mark Brown authored
      Kernel Page Table Isolation (KPTI) is used to mitigate some
      speculation-based security issues by ensuring that the kernel is not
      mapped when userspace is running, but this approach is expensive and
      incompatible with SPE. E0PD, introduced in the ARMv8.5 extensions,
      provides an alternative which ensures that accesses from userspace to
      the kernel's half of the memory map always fault in constant time,
      preventing timing attacks without requiring constant unmapping and
      remapping and without blocking legitimate accesses.
      
      Currently this feature will only be enabled if all CPUs in the system
      support E0PD. If some CPUs do not support the feature at boot time,
      the feature will not be enabled, and in the unlikely event that a late
      CPU is the first CPU to lack the feature, that CPU will be rejected.
      
      This initial patch does not yet integrate with KPTI; this will be dealt
      with in follow-up patches. Ideally we could ensure that by default we
      don't use KPTI on CPUs where E0PD is present. (The TCR_EL1 control bits
      involved are sketched after this entry.)
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      [will: Fixed typo in Kconfig text]
      Signed-off-by: Will Deacon <will@kernel.org>
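      
      In hardware terms E0PD is two new TCR_EL1 bits. A hedged, EL1-only
      sketch of the control involved (illustrative, not the kernel's actual
      code path):
      
      #include <stdint.h>
      
      #define TCR_E0PD0 (1UL << 55)  /* EL0 accesses to the TTBR0 half fault */
      #define TCR_E0PD1 (1UL << 56)  /* EL0 accesses to the TTBR1 half fault */
      
      /* EL1-only: make userspace accesses to kernel VAs fault in constant
       * time, which is what lets E0PD stand in for KPTI's unmapping */
      static inline void enable_e0pd(void)
      {
              uint64_t tcr;
      
              asm volatile("mrs %0, tcr_el1" : "=r" (tcr));
              tcr |= TCR_E0PD1;
              asm volatile("msr tcr_el1, %0" : : "r" (tcr));
              asm volatile("isb");
      }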
    • arm64: Move the LSE gas support detection to Kconfig · 395af861
      Catalin Marinas authored
      As the Kconfig syntax gained support for $(as-instr) tests, move the
      LSE gas support detection from the Makefile to the main arm64 Kconfig
      and remove the additional CONFIG_AS_LSE definition and check. (The kind
      of instruction such a test probes for is shown after this entry.)
      
      Cc: Will Deacon <will@kernel.org>
      Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
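      
      What such a $(as-instr) test effectively probes is whether gas accepts
      an LSE mnemonic. A hedged standalone example of one (aarch64; build
      with -march=armv8.1-a or a gas new enough for LSE, which is exactly
      the capability the Kconfig test detects):
      
      #include <stdio.h>
      
      /* LDADD is an ARMv8.1 LSE atomic: fetch *p, add v, store back, and
       * return the old value; an assembler without LSE rejects it. */
      static int fetch_add(int *p, int v)
      {
              int old;
      
              asm volatile("ldadd %w[v], %w[old], %[ptr]"
                           : [old] "=&r" (old), [ptr] "+Q" (*p)
                           : [v] "r" (v)
                           : "memory");
              return old;
      }
      
      int main(void)
      {
              int x = 40;
      
              printf("%d %d\n", fetch_add(&x, 2), x);  /* prints "40 42" */
              return 0;
      }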
  22. 09 Jan 2020, 1 commit
  23. 07 Jan 2020, 1 commit
  24. 13 Dec 2019, 1 commit
  25. 12 Dec 2019, 1 commit
  26. 08 Dec 2019, 1 commit
  27. 25 Nov 2019, 1 commit
  28. 17 Nov 2019, 1 commit
    • int128: move __uint128_t compiler test to Kconfig · c12d3362
      Ard Biesheuvel authored
      In order to use 128-bit integer arithmetic in C code, the architecture
      needs to have declared support for it by setting ARCH_SUPPORTS_INT128,
      and it requires a version of the toolchain that supports this at build
      time. This is why all existing tests for ARCH_SUPPORTS_INT128 also test
      whether __SIZEOF_INT128__ is defined, since this is only the case for
      compilers that can support 128-bit integers.
      
      Let's fold this additional test into the Kconfig declaration of
      ARCH_SUPPORTS_INT128 so that we can also use the symbol in Makefiles,
      e.g., to decide whether a certain object needs to be included in the
      first place. (The compiler-side check is illustrated after this entry.)
      
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
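      
      The test being folded in is essentially a check for __SIZEOF_INT128__.
      A hedged standalone illustration of both the check and the kind of
      arithmetic it gates:
      
      #include <stdint.h>
      #include <stdio.h>
      
      /* compilers define __SIZEOF_INT128__ exactly when __uint128_t and
       * __int128 are usable, which is the condition moved into Kconfig */
      #ifdef __SIZEOF_INT128__
      static uint64_t mulhi64(uint64_t a, uint64_t b)
      {
              /* upper half of the full 128-bit product */
              return (uint64_t)(((__uint128_t)a * b) >> 64);
      }
      #endif
      
      int main(void)
      {
      #ifdef __SIZEOF_INT128__
              /* prints 18446744073709551614, i.e. 2^64 - 2 */
              printf("%llu\n",
                     (unsigned long long)mulhi64(UINT64_MAX, UINT64_MAX));
      #else
              puts("no 128-bit integer support in this compiler");
      #endif
              return 0;
      }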